Amazon Lookout for Equipment

2021/04/08 - Amazon Lookout for Equipment - 22 new API methods

Changes  This release introduces support for Amazon Lookout for Equipment.

StartDataIngestionJob (new) Link ¶

Starts a data ingestion job. Amazon Lookout for Equipment returns the job status.

See also: AWS API Documentation

Request Syntax

client.start_data_ingestion_job(
    DatasetName='string',
    IngestionInputConfiguration={
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        }
    },
    RoleArn='string',
    ClientToken='string'
)
type DatasetName

string

param DatasetName

[REQUIRED]

The name of the dataset being used by the data ingestion job.

type IngestionInputConfiguration

dict

param IngestionInputConfiguration

[REQUIRED]

Specifies information for the input data for the data ingestion job, including dataset S3 location.

  • S3InputConfiguration (dict) -- [REQUIRED]

    The location information for the S3 bucket used for input data for the data ingestion.

    • Bucket (string) -- [REQUIRED]

      The name of the S3 bucket used for the input data for the data ingestion.

    • Prefix (string) --

      The prefix for the S3 location being used for the input data for the data ingestion.

type RoleArn

string

param RoleArn

[REQUIRED]

The Amazon Resource Name (ARN) of a role with permission to access the data source for the data ingestion job.

type ClientToken

string

param ClientToken

[REQUIRED]

A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.

This field is autopopulated if not provided.

rtype

dict

returns

Response Syntax

{
    'JobId': 'string',
    'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED'
}

Response Structure

  • (dict) --

    • JobId (string) --

      Indicates the job ID of the data ingestion job.

    • Status (string) --

      Indicates the status of the StartDataIngestionJob operation.
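As a quick illustration, the call above can be wrapped in a small helper. This is a sketch, not part of the release notes: it assumes a boto3 `lookoutequipment` client created elsewhere (`client = boto3.client("lookoutequipment")`), and every resource name in it is hypothetical.

```python
def start_ingestion(client, dataset_name, bucket, prefix, role_arn):
    """Start a data ingestion job and return its (JobId, Status) pair.

    `client` is assumed to be a boto3 "lookoutequipment" client; the
    dataset, bucket, and role names are caller-supplied placeholders.
    """
    response = client.start_data_ingestion_job(
        DatasetName=dataset_name,
        IngestionInputConfiguration={
            "S3InputConfiguration": {"Bucket": bucket, "Prefix": prefix}
        },
        RoleArn=role_arn,
        # ClientToken is omitted here; boto3 autopopulates it.
    )
    return response["JobId"], response["Status"]
```

The returned `Status` starts as `IN_PROGRESS`; poll DescribeDataIngestionJob with the returned `JobId` to track completion.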

DescribeInferenceScheduler (new) Link ¶

Provides information about the inference scheduler being used, including name, model, status, and associated metadata.

See also: AWS API Documentation

Request Syntax

client.describe_inference_scheduler(
    InferenceSchedulerName='string'
)
type InferenceSchedulerName

string

param InferenceSchedulerName

[REQUIRED]

The name of the inference scheduler being described.

rtype

dict

returns

Response Syntax

{
    'ModelArn': 'string',
    'ModelName': 'string',
    'InferenceSchedulerName': 'string',
    'InferenceSchedulerArn': 'string',
    'Status': 'PENDING'|'RUNNING'|'STOPPING'|'STOPPED',
    'DataDelayOffsetInMinutes': 123,
    'DataUploadFrequency': 'PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H',
    'CreatedAt': datetime(2015, 1, 1),
    'UpdatedAt': datetime(2015, 1, 1),
    'DataInputConfiguration': {
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'InputTimeZoneOffset': 'string',
        'InferenceInputNameConfiguration': {
            'TimestampFormat': 'string',
            'ComponentTimestampDelimiter': 'string'
        }
    },
    'DataOutputConfiguration': {
        'S3OutputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'KmsKeyId': 'string'
    },
    'RoleArn': 'string',
    'ServerSideKmsKeyId': 'string'
}

Response Structure

  • (dict) --

    • ModelArn (string) --

      The Amazon Resource Name (ARN) of the ML model of the inference scheduler being described.

    • ModelName (string) --

      The name of the ML model of the inference scheduler being described.

    • InferenceSchedulerName (string) --

      The name of the inference scheduler being described.

    • InferenceSchedulerArn (string) --

      The Amazon Resource Name (ARN) of the inference scheduler being described.

    • Status (string) --

      Indicates the status of the inference scheduler.

    • DataDelayOffsetInMinutes (integer) --

      A period of time (in minutes) by which inference on the data is delayed after the data starts. For instance, with an offset delay of five minutes, inference does not begin until the first data measurement after the five minute mark: the scheduler wakes up at the configured frequency plus the five-minute delay before checking the S3 bucket. This lets you upload data at the same frequency without stopping and restarting the scheduler when new data arrives.

    • DataUploadFrequency (string) --

      Specifies how often data is uploaded to the source S3 bucket for the input data. This value is the length of time between data uploads. For instance, if you select 5 minutes, Amazon Lookout for Equipment will upload the real-time data to the source bucket once every 5 minutes. This frequency also determines how often Amazon Lookout for Equipment starts a scheduled inference on your data. In this example, it starts once every 5 minutes.

    • CreatedAt (datetime) --

      Specifies the time at which the inference scheduler was created.

    • UpdatedAt (datetime) --

      Specifies the time at which the inference scheduler was last updated, if it was.

    • DataInputConfiguration (dict) --

      Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.

      • S3InputConfiguration (dict) --

        Specifies configuration information for the input data for the inference, including the S3 location of the input data.

        • Bucket (string) --

          The bucket containing the input dataset for the inference.

        • Prefix (string) --

          The prefix for the S3 bucket used for the input data for the inference.

      • InputTimeZoneOffset (string) --

        Indicates the difference between your time zone and Greenwich Mean Time (GMT).

      • InferenceInputNameConfiguration (dict) --

        Specifies configuration information for the input data for the inference, including timestamp format and delimiter.

        • TimestampFormat (string) --

          The format of the timestamp: either Epoch time or a standard format, with or without hyphens (-).

        • ComponentTimestampDelimiter (string) --

          Indicates the delimiter character used between items in the data.

    • DataOutputConfiguration (dict) --

      Specifies information for the output results for the inference scheduler, including the output S3 location.

      • S3OutputConfiguration (dict) --

        Specifies configuration information for the output results from the inference, including the output S3 location.

        • Bucket (string) --

          The bucket containing the output results from the inference.

        • Prefix (string) --

          The prefix for the S3 bucket used for the output results from the inference.

      • KmsKeyId (string) --

        The ID number for the AWS KMS key used to encrypt the inference output.

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of a role with permission to access the data source for the inference scheduler being described.

    • ServerSideKmsKeyId (string) --

      Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt inference scheduler data by Amazon Lookout for Equipment.
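The response above carries more fields than most callers need; a minimal sketch of pulling out a compact summary, assuming a boto3 `lookoutequipment` client and a hypothetical scheduler name:

```python
def describe_scheduler(client, scheduler_name):
    """Return a compact (Status, ModelName, DataUploadFrequency) view.

    `client` is assumed to be a boto3 "lookoutequipment" client;
    `scheduler_name` is a placeholder for one of your schedulers.
    """
    resp = client.describe_inference_scheduler(
        InferenceSchedulerName=scheduler_name
    )
    return resp["Status"], resp["ModelName"], resp["DataUploadFrequency"]
```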

CreateModel (new) Link ¶

Creates an ML model for data inference.

A machine-learning (ML) model is a mathematical model that finds patterns in your data. In Amazon Lookout for Equipment, the model learns the patterns of normal behavior and detects abnormal behavior that could be potential equipment failure (or maintenance events). The models are made by analyzing normal data and abnormalities in machine behavior that have already occurred.

Your model is trained using a portion of the data from your dataset and uses that data to learn patterns of normal behavior and abnormal patterns that lead to equipment failure. Another portion of the data is used to evaluate the model's accuracy.

See also: AWS API Documentation

Request Syntax

client.create_model(
    ModelName='string',
    DatasetName='string',
    DatasetSchema={
        'InlineDataSchema': 'string'
    },
    LabelsInputConfiguration={
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        }
    },
    ClientToken='string',
    TrainingDataStartTime=datetime(2015, 1, 1),
    TrainingDataEndTime=datetime(2015, 1, 1),
    EvaluationDataStartTime=datetime(2015, 1, 1),
    EvaluationDataEndTime=datetime(2015, 1, 1),
    RoleArn='string',
    DataPreProcessingConfiguration={
        'TargetSamplingRate': 'PT1S'|'PT5S'|'PT10S'|'PT15S'|'PT30S'|'PT1M'|'PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H'
    },
    ServerSideKmsKeyId='string',
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type ModelName

string

param ModelName

[REQUIRED]

The name for the ML model to be created.

type DatasetName

string

param DatasetName

[REQUIRED]

The name of the dataset for the ML model being created.

type DatasetSchema

dict

param DatasetSchema

The data schema for the ML model being created.

  • InlineDataSchema (string) --

type LabelsInputConfiguration

dict

param LabelsInputConfiguration

The input configuration for the labels being used for the ML model that's being created.

  • S3InputConfiguration (dict) -- [REQUIRED]

    Contains location information for the S3 location being used for label data.

    • Bucket (string) -- [REQUIRED]

      The name of the S3 bucket holding the label data.

    • Prefix (string) --

      The prefix for the S3 bucket used for the label data.

type ClientToken

string

param ClientToken

[REQUIRED]

A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.

This field is autopopulated if not provided.

type TrainingDataStartTime

datetime

param TrainingDataStartTime

Indicates the time reference in the dataset that should be used to begin the subset of training data for the ML model.

type TrainingDataEndTime

datetime

param TrainingDataEndTime

Indicates the time reference in the dataset that should be used to end the subset of training data for the ML model.

type EvaluationDataStartTime

datetime

param EvaluationDataStartTime

Indicates the time reference in the dataset that should be used to begin the subset of evaluation data for the ML model.

type EvaluationDataEndTime

datetime

param EvaluationDataEndTime

Indicates the time reference in the dataset that should be used to end the subset of evaluation data for the ML model.

type RoleArn

string

param RoleArn

The Amazon Resource Name (ARN) of a role with permission to access the data source being used to create the ML model.

type DataPreProcessingConfiguration

dict

param DataPreProcessingConfiguration

The configuration is the TargetSamplingRate, which is the sampling rate of the data after post processing by Amazon Lookout for Equipment. For example, if you provide data that has been collected at a 1 second level and you want the system to resample the data at a 1 minute rate before training, the TargetSamplingRate is 1 minute.

When providing a value for the TargetSamplingRate, you must attach the prefix "PT" to the rate you want. The value for a 1 second rate is therefore PT1S, the value for a 15 minute rate is PT15M, and the value for a 1 hour rate is PT1H.

  • TargetSamplingRate (string) --

    The sampling rate of the data after post processing by Amazon Lookout for Equipment. For example, if you provide data that has been collected at a 1 second level and you want the system to resample the data at a 1 minute rate before training, the TargetSamplingRate is 1 minute.

    When providing a value for the TargetSamplingRate, you must attach the prefix "PT" to the rate you want. The value for a 1 second rate is therefore PT1S, the value for a 15 minute rate is PT15M, and the value for a 1 hour rate is PT1H.
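The "PT"-prefixed values are ISO 8601 duration designators. A small stdlib-only helper (an illustration, not part of the SDK) that maps a rate in seconds to the code accepted by TargetSamplingRate:

```python
# Maps each supported sampling rate, expressed in seconds, to the
# ISO 8601 duration code that TargetSamplingRate accepts.
_SUPPORTED_SECONDS = {
    1: "PT1S", 5: "PT5S", 10: "PT10S", 15: "PT15S", 30: "PT30S",
    60: "PT1M", 300: "PT5M", 600: "PT10M", 900: "PT15M",
    1800: "PT30M", 3600: "PT1H",
}

def target_sampling_rate(seconds):
    """Return the TargetSamplingRate code for a rate given in seconds,
    or raise ValueError for a rate the service does not support."""
    try:
        return _SUPPORTED_SECONDS[seconds]
    except KeyError:
        raise ValueError(f"unsupported sampling rate: {seconds}s")
```

For example, `target_sampling_rate(60)` yields `"PT1M"` and `target_sampling_rate(900)` yields `"PT15M"`.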

type ServerSideKmsKeyId

string

param ServerSideKmsKeyId

Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt model data by Amazon Lookout for Equipment.

type Tags

list

param Tags

Any tags associated with the ML model being created.

  • (dict) --

    A tag is a key-value pair that can be added to a resource as metadata.

    • Key (string) -- [REQUIRED]

      The key for the specified tag.

    • Value (string) -- [REQUIRED]

      The value for the specified tag.

rtype

dict

returns

Response Syntax

{
    'ModelArn': 'string',
    'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED'
}

Response Structure

  • (dict) --

    • ModelArn (string) --

      The Amazon Resource Name (ARN) of the model being created.

    • Status (string) --

      Indicates the status of the CreateModel operation.
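A sketch of the call, showing how the training and evaluation windows fit together. It assumes a boto3 `lookoutequipment` client; the optional parameters noted in the comment are omitted, and all names are hypothetical.

```python
from datetime import datetime  # the time windows are datetime values

def create_model(client, model_name, dataset_name, role_arn,
                 train_start, train_end, eval_start, eval_end):
    """Create an ML model trained on [train_start, train_end] and
    evaluated on [eval_start, eval_end].

    `client` is assumed to be a boto3 "lookoutequipment" client.
    """
    resp = client.create_model(
        ModelName=model_name,
        DatasetName=dataset_name,
        RoleArn=role_arn,
        TrainingDataStartTime=train_start,
        TrainingDataEndTime=train_end,
        EvaluationDataStartTime=eval_start,
        EvaluationDataEndTime=eval_end,
        # Also accepted: DatasetSchema, LabelsInputConfiguration,
        # DataPreProcessingConfiguration, ServerSideKmsKeyId, Tags.
    )
    return resp["ModelArn"], resp["Status"]
```

`Status` is `IN_PROGRESS` while training runs; DescribeModel reports when it reaches `SUCCESS` or `FAILED`.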

DescribeModel (new) Link ¶

Provides overall information about a specific ML model, including model name and ARN, dataset, training and evaluation information, status, and so on.

See also: AWS API Documentation

Request Syntax

client.describe_model(
    ModelName='string'
)
type ModelName

string

param ModelName

[REQUIRED]

The name of the ML model to be described.

rtype

dict

returns

Response Syntax

{
    'ModelName': 'string',
    'ModelArn': 'string',
    'DatasetName': 'string',
    'DatasetArn': 'string',
    'Schema': 'string',
    'LabelsInputConfiguration': {
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        }
    },
    'TrainingDataStartTime': datetime(2015, 1, 1),
    'TrainingDataEndTime': datetime(2015, 1, 1),
    'EvaluationDataStartTime': datetime(2015, 1, 1),
    'EvaluationDataEndTime': datetime(2015, 1, 1),
    'RoleArn': 'string',
    'DataPreProcessingConfiguration': {
        'TargetSamplingRate': 'PT1S'|'PT5S'|'PT10S'|'PT15S'|'PT30S'|'PT1M'|'PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H'
    },
    'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED',
    'TrainingExecutionStartTime': datetime(2015, 1, 1),
    'TrainingExecutionEndTime': datetime(2015, 1, 1),
    'FailedReason': 'string',
    'ModelMetrics': 'string',
    'LastUpdatedTime': datetime(2015, 1, 1),
    'CreatedAt': datetime(2015, 1, 1),
    'ServerSideKmsKeyId': 'string'
}

Response Structure

  • (dict) --

    • ModelName (string) --

      The name of the ML model being described.

    • ModelArn (string) --

      The Amazon Resource Name (ARN) of the ML model being described.

    • DatasetName (string) --

      The name of the dataset being used by the ML model being described.

    • DatasetArn (string) --

      The Amazon Resource Name (ARN) of the dataset used to create the ML model being described.

    • Schema (string) --

      A JSON description of the data that is in each time series dataset, including names, column names, and data types.

    • LabelsInputConfiguration (dict) --

      Specifies configuration information about the labels input, including its S3 location.

      • S3InputConfiguration (dict) --

        Contains location information for the S3 location being used for label data.

        • Bucket (string) --

          The name of the S3 bucket holding the label data.

        • Prefix (string) --

          The prefix for the S3 bucket used for the label data.

    • TrainingDataStartTime (datetime) --

      Indicates the time reference in the dataset that was used to begin the subset of training data for the ML model.

    • TrainingDataEndTime (datetime) --

      Indicates the time reference in the dataset that was used to end the subset of training data for the ML model.

    • EvaluationDataStartTime (datetime) --

      Indicates the time reference in the dataset that was used to begin the subset of evaluation data for the ML model.

    • EvaluationDataEndTime (datetime) --

      Indicates the time reference in the dataset that was used to end the subset of evaluation data for the ML model.

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of a role with permission to access the data source for the ML model being described.

    • DataPreProcessingConfiguration (dict) --

      The configuration is the TargetSamplingRate, which is the sampling rate of the data after post processing by Amazon Lookout for Equipment. For example, if you provide data that has been collected at a 1 second level and you want the system to resample the data at a 1 minute rate before training, the TargetSamplingRate is 1 minute.

      When providing a value for the TargetSamplingRate, you must attach the prefix "PT" to the rate you want. The value for a 1 second rate is therefore PT1S, the value for a 15 minute rate is PT15M, and the value for a 1 hour rate is PT1H.

      • TargetSamplingRate (string) --

        The sampling rate of the data after post processing by Amazon Lookout for Equipment. For example, if you provide data that has been collected at a 1 second level and you want the system to resample the data at a 1 minute rate before training, the TargetSamplingRate is 1 minute.

        When providing a value for the TargetSamplingRate, you must attach the prefix "PT" to the rate you want. The value for a 1 second rate is therefore PT1S, the value for a 15 minute rate is PT15M, and the value for a 1 hour rate is PT1H.

    • Status (string) --

      Specifies the current status of the model being described. Status describes the status of the most recent action of the model.

    • TrainingExecutionStartTime (datetime) --

      Indicates the time at which the training of the ML model began.

    • TrainingExecutionEndTime (datetime) --

      Indicates the time at which the training of the ML model was completed.

    • FailedReason (string) --

      If the training of the ML model failed, this indicates the reason for that failure.

    • ModelMetrics (string) --

      An aggregated summary of the model's performance within the evaluation time range, provided as the JSON content of the metrics created when the model was evaluated.

    • LastUpdatedTime (datetime) --

      Indicates the last time the ML model was updated. The type of update is not specified.

    • CreatedAt (datetime) --

      Indicates the time and date at which the ML model was created.

    • ServerSideKmsKeyId (string) --

      Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt model data by Amazon Lookout for Equipment.
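Since `Status` stays `IN_PROGRESS` during training, a common pattern is to poll DescribeModel until it reaches a terminal state. A sketch, assuming a boto3 `lookoutequipment` client (the `sleep` parameter exists only so the loop can be exercised without waiting):

```python
import time

def wait_for_training(client, model_name, delay=60, sleep=time.sleep):
    """Poll DescribeModel until Status leaves IN_PROGRESS; return
    (Status, FailedReason). FailedReason is None unless training failed.

    `client` is assumed to be a boto3 "lookoutequipment" client.
    """
    while True:
        resp = client.describe_model(ModelName=model_name)
        if resp["Status"] != "IN_PROGRESS":
            return resp["Status"], resp.get("FailedReason")
        sleep(delay)  # wait before polling again
```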

ListInferenceSchedulers (new) Link ¶

Retrieves a list of all inference schedulers currently available for your account.

See also: AWS API Documentation

Request Syntax

client.list_inference_schedulers(
    NextToken='string',
    MaxResults=123,
    InferenceSchedulerNameBeginsWith='string',
    ModelName='string'
)
type NextToken

string

param NextToken

An opaque pagination token indicating where to continue the listing of inference schedulers.

type MaxResults

integer

param MaxResults

Specifies the maximum number of inference schedulers to list.

type InferenceSchedulerNameBeginsWith

string

param InferenceSchedulerNameBeginsWith

The beginning of the name of the inference schedulers to be listed.

type ModelName

string

param ModelName

The name of the ML model used by the inference scheduler to be listed.

rtype

dict

returns

Response Syntax

{
    'NextToken': 'string',
    'InferenceSchedulerSummaries': [
        {
            'ModelName': 'string',
            'ModelArn': 'string',
            'InferenceSchedulerName': 'string',
            'InferenceSchedulerArn': 'string',
            'Status': 'PENDING'|'RUNNING'|'STOPPING'|'STOPPED',
            'DataDelayOffsetInMinutes': 123,
            'DataUploadFrequency': 'PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H'
        },
    ]
}

Response Structure

  • (dict) --

    • NextToken (string) --

      An opaque pagination token indicating where to continue the listing of inference schedulers.

    • InferenceSchedulerSummaries (list) --

      Provides information about the specified inference scheduler, including data upload frequency, model name and ARN, and status.

      • (dict) --

        Contains information about the specific inference scheduler, including data delay offset, model name and ARN, status, and so on.

        • ModelName (string) --

          The name of the ML model used for the inference scheduler.

        • ModelArn (string) --

          The Amazon Resource Name (ARN) of the ML model used by the inference scheduler.

        • InferenceSchedulerName (string) --

          The name of the inference scheduler.

        • InferenceSchedulerArn (string) --

          The Amazon Resource Name (ARN) of the inference scheduler.

        • Status (string) --

          Indicates the status of the inference scheduler.

        • DataDelayOffsetInMinutes (integer) --

          A period of time (in minutes) by which inference on the data is delayed after the data starts. For instance, with an offset delay of five minutes, inference does not begin until the first data measurement after the five minute mark: the scheduler wakes up at the configured frequency plus the five-minute delay before checking the S3 bucket. This lets you upload data at the same frequency without stopping and restarting the scheduler when new data arrives.

        • DataUploadFrequency (string) --

          How often data is uploaded to the source S3 bucket for the input data. This value is the length of time between data uploads. For instance, if you select 5 minutes, Amazon Lookout for Equipment will upload the real-time data to the source bucket once every 5 minutes. This frequency also determines how often Amazon Lookout for Equipment starts a scheduled inference on your data. In this example, it starts once every 5 minutes.
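Because results arrive a page at a time via NextToken, listing everything takes a loop. A sketch of the standard pagination pattern, assuming a boto3 `lookoutequipment` client (any keyword filters such as `ModelName` are passed through):

```python
def list_all_schedulers(client, **filters):
    """Collect every InferenceSchedulerSummary across all pages.

    `client` is assumed to be a boto3 "lookoutequipment" client;
    `filters` may carry ModelName, InferenceSchedulerNameBeginsWith, etc.
    """
    summaries = []
    token = None
    while True:
        kwargs = dict(filters)
        if token:
            kwargs["NextToken"] = token
        resp = client.list_inference_schedulers(**kwargs)
        summaries.extend(resp.get("InferenceSchedulerSummaries", []))
        token = resp.get("NextToken")
        if not token:  # no more pages
            return summaries
```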

CreateInferenceScheduler (new) Link ¶

Creates a scheduled inference. Scheduling an inference means setting up a continuous, real-time inference plan to analyze new measurement data. When you set up the schedule, you provide an S3 bucket location for the input data, specify the delimiter between entries in the data, set an offset delay if desired, and set the frequency of inferencing. You must also provide an S3 bucket location for the output data.

See also: AWS API Documentation

Request Syntax

client.create_inference_scheduler(
    ModelName='string',
    InferenceSchedulerName='string',
    DataDelayOffsetInMinutes=123,
    DataUploadFrequency='PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H',
    DataInputConfiguration={
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'InputTimeZoneOffset': 'string',
        'InferenceInputNameConfiguration': {
            'TimestampFormat': 'string',
            'ComponentTimestampDelimiter': 'string'
        }
    },
    DataOutputConfiguration={
        'S3OutputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'KmsKeyId': 'string'
    },
    RoleArn='string',
    ServerSideKmsKeyId='string',
    ClientToken='string',
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type ModelName

string

param ModelName

[REQUIRED]

The name of the previously trained ML model being used to create the inference scheduler.

type InferenceSchedulerName

string

param InferenceSchedulerName

[REQUIRED]

The name of the inference scheduler being created.

type DataDelayOffsetInMinutes

integer

param DataDelayOffsetInMinutes

A period of time (in minutes) by which inference on the data is delayed after the data starts. For instance, with an offset delay of five minutes, inference does not begin until the first data measurement after the five minute mark: the scheduler wakes up at the configured frequency plus the five-minute delay before checking the S3 bucket. This lets you upload data at the same frequency without stopping and restarting the scheduler when new data arrives.

type DataUploadFrequency

string

param DataUploadFrequency

[REQUIRED]

How often data is uploaded to the source S3 bucket for the input data. The value chosen is the length of time between data uploads. For instance, if you select 5 minutes, Amazon Lookout for Equipment will upload the real-time data to the source bucket once every 5 minutes. This frequency also determines how often Amazon Lookout for Equipment starts a scheduled inference on your data. In this example, it starts once every 5 minutes.

type DataInputConfiguration

dict

param DataInputConfiguration

[REQUIRED]

Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.

  • S3InputConfiguration (dict) --

    Specifies configuration information for the input data for the inference, including the S3 location of the input data.

    • Bucket (string) -- [REQUIRED]

      The bucket containing the input dataset for the inference.

    • Prefix (string) --

      The prefix for the S3 bucket used for the input data for the inference.

  • InputTimeZoneOffset (string) --

    Indicates the difference between your time zone and Greenwich Mean Time (GMT).

  • InferenceInputNameConfiguration (dict) --

    Specifies configuration information for the input data for the inference, including timestamp format and delimiter.

    • TimestampFormat (string) --

      The format of the timestamp: either Epoch time or a standard format, with or without hyphens (-).

    • ComponentTimestampDelimiter (string) --

      Indicates the delimiter character used between items in the data.

type DataOutputConfiguration

dict

param DataOutputConfiguration

[REQUIRED]

Specifies configuration information for the output results for the inference scheduler, including the S3 location for the output.

  • S3OutputConfiguration (dict) -- [REQUIRED]

    Specifies configuration information for the output results from the inference, including the output S3 location.

    • Bucket (string) -- [REQUIRED]

      The bucket containing the output results from the inference.

    • Prefix (string) --

      The prefix for the S3 bucket used for the output results from the inference.

  • KmsKeyId (string) --

    The ID number for the AWS KMS key used to encrypt the inference output.

type RoleArn

string

param RoleArn

[REQUIRED]

The Amazon Resource Name (ARN) of a role with permission to access the data source being used for the inference.

type ServerSideKmsKeyId

string

param ServerSideKmsKeyId

Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt inference scheduler data by Amazon Lookout for Equipment.

type ClientToken

string

param ClientToken

[REQUIRED]

A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.

This field is autopopulated if not provided.

type Tags

list

param Tags

Any tags associated with the inference scheduler.

  • (dict) --

    A tag is a key-value pair that can be added to a resource as metadata.

    • Key (string) -- [REQUIRED]

      The key for the specified tag.

    • Value (string) -- [REQUIRED]

      The value for the specified tag.

rtype

dict

returns

Response Syntax

{
    'InferenceSchedulerArn': 'string',
    'InferenceSchedulerName': 'string',
    'Status': 'PENDING'|'RUNNING'|'STOPPING'|'STOPPED'
}

Response Structure

  • (dict) --

    • InferenceSchedulerArn (string) --

      The Amazon Resource Name (ARN) of the inference scheduler being created.

    • InferenceSchedulerName (string) --

      The name of the inference scheduler being created.

    • Status (string) --

      Indicates the status of the CreateInferenceScheduler operation.
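A minimal sketch of the call, covering only the required parameters; it assumes a boto3 `lookoutequipment` client, and the bucket, role, and scheduler names are hypothetical.

```python
def create_scheduler(client, model_name, scheduler_name, role_arn,
                     input_bucket, output_bucket, frequency="PT5M"):
    """Create an inference scheduler with the minimum required settings
    and return its (InferenceSchedulerArn, Status) pair.

    `client` is assumed to be a boto3 "lookoutequipment" client.
    Optional settings (DataDelayOffsetInMinutes, InputTimeZoneOffset,
    KMS keys, Tags, ...) are omitted from this sketch.
    """
    resp = client.create_inference_scheduler(
        ModelName=model_name,
        InferenceSchedulerName=scheduler_name,
        DataUploadFrequency=frequency,  # one of PT5M|PT10M|PT15M|PT30M|PT1H
        DataInputConfiguration={
            "S3InputConfiguration": {"Bucket": input_bucket}
        },
        DataOutputConfiguration={
            "S3OutputConfiguration": {"Bucket": output_bucket}
        },
        RoleArn=role_arn,
    )
    return resp["InferenceSchedulerArn"], resp["Status"]
```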

TagResource (new) Link ¶

Associates a given tag with a resource in your account. A tag is a key-value pair that can be added to an Amazon Lookout for Equipment resource as metadata. Tags can be used to organize your resources and to search and filter by tag. You can add multiple tags to a resource, either when you create it or later, and associate up to 50 tags with each resource.

See also: AWS API Documentation

Request Syntax

client.tag_resource(
    ResourceArn='string',
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type ResourceArn

string

param ResourceArn

[REQUIRED]

The Amazon Resource Name (ARN) of the specific resource to which the tag should be associated.

type Tags

list

param Tags

[REQUIRED]

The tag or tags to be associated with a specific resource. Both the tag key and value are specified.

  • (dict) --

    A tag is a key-value pair that can be added to a resource as metadata.

    • Key (string) -- [REQUIRED]

      The key for the specified tag.

    • Value (string) -- [REQUIRED]

      The value for the specified tag.

rtype

dict

returns

Response Syntax

{}

Response Structure

  • (dict) --

StopInferenceScheduler (new) Link ¶

Stops an inference scheduler.

See also: AWS API Documentation

Request Syntax

client.stop_inference_scheduler(
    InferenceSchedulerName='string'
)
type InferenceSchedulerName

string

param InferenceSchedulerName

[REQUIRED]

The name of the inference scheduler to be stopped.

rtype

dict

returns

Response Syntax

{
    'ModelArn': 'string',
    'ModelName': 'string',
    'InferenceSchedulerName': 'string',
    'InferenceSchedulerArn': 'string',
    'Status': 'PENDING'|'RUNNING'|'STOPPING'|'STOPPED'
}

Response Structure

  • (dict) --

    • ModelArn (string) --

      The Amazon Resource Name (ARN) of the ML model used by the inference scheduler being stopped.

    • ModelName (string) --

      The name of the ML model used by the inference scheduler being stopped.

    • InferenceSchedulerName (string) --

      The name of the inference scheduler being stopped.

    • InferenceSchedulerArn (string) --

      The Amazon Resource Name (ARN) of the inference scheduler being stopped.

    • Status (string) --

      Indicates the status of the inference scheduler.
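Stopping transitions the scheduler through `STOPPING` before it reaches `STOPPED`, so callers that need the scheduler fully stopped typically poll DescribeInferenceScheduler afterward. A sketch, assuming a boto3 `lookoutequipment` client (the `sleep` parameter is injectable only so the loop can be tested without waiting):

```python
import time

def stop_scheduler(client, scheduler_name, delay=10, sleep=time.sleep):
    """Stop an inference scheduler and wait until it reports STOPPED.

    `client` is assumed to be a boto3 "lookoutequipment" client.
    """
    status = client.stop_inference_scheduler(
        InferenceSchedulerName=scheduler_name
    )["Status"]
    while status == "STOPPING":
        sleep(delay)  # wait before checking the status again
        status = client.describe_inference_scheduler(
            InferenceSchedulerName=scheduler_name
        )["Status"]
    return status
```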

DescribeDataIngestionJob (new) Link ¶

Provides information on a specific data ingestion job such as creation time, dataset ARN, status, and so on.

See also: AWS API Documentation

Request Syntax

client.describe_data_ingestion_job(
    JobId='string'
)
type JobId

string

param JobId

[REQUIRED]

The job ID of the data ingestion job.

rtype

dict

returns

Response Syntax

{
    'JobId': 'string',
    'DatasetArn': 'string',
    'IngestionInputConfiguration': {
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        }
    },
    'RoleArn': 'string',
    'CreatedAt': datetime(2015, 1, 1),
    'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED',
    'FailedReason': 'string'
}

Response Structure

  • (dict) --

    • JobId (string) --

      Indicates the job ID of the data ingestion job.

    • DatasetArn (string) --

      The Amazon Resource Name (ARN) of the dataset being used in the data ingestion job.

    • IngestionInputConfiguration (dict) --

      Specifies the S3 location configuration for the data input for the data ingestion job.

      • S3InputConfiguration (dict) --

        The location information for the S3 bucket used for input data for the data ingestion.

        • Bucket (string) --

          The name of the S3 bucket used for the input data for the data ingestion.

        • Prefix (string) --

          The prefix for the S3 location being used for the input data for the data ingestion.

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of an IAM role with permission to access the data source being ingested.

    • CreatedAt (datetime) --

      The time at which the data ingestion job was created.

    • Status (string) --

      Indicates the status of the DataIngestionJob operation.

    • FailedReason (string) --

      Specifies the reason for failure when a data ingestion job has failed.
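
The IN_PROGRESS/SUCCESS/FAILED statuses above lend themselves to a polling loop. As a minimal sketch (assuming `client` is a boto3 `lookoutequipment` client created elsewhere; the interval and timeout values are illustrative):

```python
import time

def wait_for_ingestion_job(client, job_id, poll_seconds=30, timeout_seconds=3600):
    """Poll DescribeDataIngestionJob until the job reaches a terminal status.

    `client` is assumed to be a boto3 'lookoutequipment' client.
    Returns the final response dict on SUCCESS; raises on FAILED or timeout.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        resp = client.describe_data_ingestion_job(JobId=job_id)
        status = resp["Status"]
        if status == "SUCCESS":
            return resp
        if status == "FAILED":
            # FailedReason carries the failure explanation when present
            raise RuntimeError(
                f"Ingestion job {job_id} failed: {resp.get('FailedReason')}")
        time.sleep(poll_seconds)
    raise TimeoutError(f"Ingestion job {job_id} did not finish in {timeout_seconds}s")
```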

ListTagsForResource (new) Link ¶

Lists all the tags for a specified resource, including key and value.

See also: AWS API Documentation

Request Syntax

client.list_tags_for_resource(
    ResourceArn='string'
)
type ResourceArn

string

param ResourceArn

[REQUIRED]

The Amazon Resource Name (ARN) of the resource (such as the dataset or model) that is the focus of the ListTagsForResource operation.

rtype

dict

returns

Response Syntax

{
    'Tags': [
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
}

Response Structure

  • (dict) --

    • Tags (list) --

      Any tags associated with the resource.

      • (dict) --

        A tag is a key-value pair that can be added to a resource as metadata.

        • Key (string) --

          The key for the specified tag.

        • Value (string) --

          The value for the specified tag.
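
The Key/Value list shape above maps naturally onto a Python dict. A small hedged helper (again assuming `client` is a boto3 `lookoutequipment` client):

```python
def tags_as_dict(client, resource_arn):
    """Fetch tags for a resource and flatten the Key/Value list into a dict."""
    resp = client.list_tags_for_resource(ResourceArn=resource_arn)
    return {t["Key"]: t["Value"] for t in resp.get("Tags", [])}
```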

UntagResource (new) Link ¶

Removes a specific tag from a given resource. The tag is specified by its key.

See also: AWS API Documentation

Request Syntax

client.untag_resource(
    ResourceArn='string',
    TagKeys=[
        'string',
    ]
)
type ResourceArn

string

param ResourceArn

[REQUIRED]

The Amazon Resource Name (ARN) of the resource to which the tag is currently associated.

type TagKeys

list

param TagKeys

[REQUIRED]

Specifies the keys of the tags to be removed from a specified resource.

  • (string) --

rtype

dict

returns

Response Syntax

{}

Response Structure

  • (dict) --

CreateDataset (new) Link ¶

Creates a container for a collection of data being ingested for analysis. The dataset contains the metadata describing where the data is and what the data actually looks like. In other words, it contains the location of the data source, the data schema, and other information. A dataset also contains any tags associated with the ingested data.

See also: AWS API Documentation

Request Syntax

client.create_dataset(
    DatasetName='string',
    DatasetSchema={
        'InlineDataSchema': 'string'
    },
    ServerSideKmsKeyId='string',
    ClientToken='string',
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type DatasetName

string

param DatasetName

[REQUIRED]

The name of the dataset being created.

type DatasetSchema

dict

param DatasetSchema

[REQUIRED]

A JSON description of the data that is in each time series dataset, including names, column names, and data types.

  • InlineDataSchema (string) --

type ServerSideKmsKeyId

string

param ServerSideKmsKeyId

Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt dataset data by Amazon Lookout for Equipment.

type ClientToken

string

param ClientToken

[REQUIRED]

A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.

This field is autopopulated if not provided.

type Tags

list

param Tags

Any tags associated with the ingested data described in the dataset.

  • (dict) --

    A tag is a key-value pair that can be added to a resource as metadata.

    • Key (string) -- [REQUIRED]

      The key for the specified tag.

    • Value (string) -- [REQUIRED]

      The value for the specified tag.

rtype

dict

returns

Response Syntax

{
    'DatasetName': 'string',
    'DatasetArn': 'string',
    'Status': 'CREATED'|'INGESTION_IN_PROGRESS'|'ACTIVE'
}

Response Structure

  • (dict) --

    • DatasetName (string) --

      The name of the dataset being created.

    • DatasetArn (string) --

      The Amazon Resource Name (ARN) of the dataset being created.

    • Status (string) --

      Indicates the status of the CreateDataset operation.
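
Since InlineDataSchema is a JSON string, it is convenient to build the schema as a Python structure and serialize it. The helper below is a sketch; the Components/Columns shape and the DATETIME/DOUBLE types shown are illustrative assumptions, so consult the service documentation for the authoritative schema format.

```python
import json

def build_create_dataset_request(dataset_name, component_name, sensor_columns,
                                 client_token):
    """Assemble kwargs for create_dataset with an inline JSON schema.

    The schema shape (Components -> Columns with Name/Type) is assumed here
    for illustration; a timestamp column plus one DOUBLE column per sensor.
    """
    columns = [{"Name": "Timestamp", "Type": "DATETIME"}]
    columns += [{"Name": name, "Type": "DOUBLE"} for name in sensor_columns]
    schema = {"Components": [{"ComponentName": component_name, "Columns": columns}]}
    return {
        "DatasetName": dataset_name,
        "DatasetSchema": {"InlineDataSchema": json.dumps(schema)},
        "ClientToken": client_token,
    }
```

The result can then be passed as `client.create_dataset(**build_create_dataset_request(...))`.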

DescribeDataset (new) Link ¶

Provides information on a specified dataset such as the schema location, status, and so on.

See also: AWS API Documentation

Request Syntax

client.describe_dataset(
    DatasetName='string'
)
type DatasetName

string

param DatasetName

[REQUIRED]

The name of the dataset to be described.

rtype

dict

returns

Response Syntax

{
    'DatasetName': 'string',
    'DatasetArn': 'string',
    'CreatedAt': datetime(2015, 1, 1),
    'LastUpdatedAt': datetime(2015, 1, 1),
    'Status': 'CREATED'|'INGESTION_IN_PROGRESS'|'ACTIVE',
    'Schema': 'string',
    'ServerSideKmsKeyId': 'string',
    'IngestionInputConfiguration': {
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        }
    }
}

Response Structure

  • (dict) --

    • DatasetName (string) --

      The name of the dataset being described.

    • DatasetArn (string) --

      The Amazon Resource Name (ARN) of the dataset being described.

    • CreatedAt (datetime) --

      Specifies the time the dataset was created in Amazon Lookout for Equipment.

    • LastUpdatedAt (datetime) --

      Specifies the time the dataset was last updated, if it has been updated.

    • Status (string) --

      Indicates the status of the dataset.

    • Schema (string) --

      A JSON description of the data that is in each time series dataset, including names, column names, and data types.

    • ServerSideKmsKeyId (string) --

      Provides the identifier of the AWS KMS customer master key (CMK) used to encrypt dataset data by Amazon Lookout for Equipment.

    • IngestionInputConfiguration (dict) --

      Specifies the S3 location configuration for the data input for the data ingestion job.

      • S3InputConfiguration (dict) --

        The location information for the S3 bucket used for input data for the data ingestion.

        • Bucket (string) --

          The name of the S3 bucket used for the input data for the data ingestion.

        • Prefix (string) --

          The prefix for the S3 location being used for the input data for the data ingestion.

ListModels (new) Link ¶

Generates a list of all models in the account, including model name and ARN, dataset, and status.

See also: AWS API Documentation

Request Syntax

client.list_models(
    NextToken='string',
    MaxResults=123,
    Status='IN_PROGRESS'|'SUCCESS'|'FAILED',
    ModelNameBeginsWith='string',
    DatasetNameBeginsWith='string'
)
type NextToken

string

param NextToken

An opaque pagination token indicating where to continue the listing of ML models.

type MaxResults

integer

param MaxResults

Specifies the maximum number of ML models to list.

type Status

string

param Status

The status of the ML model.

type ModelNameBeginsWith

string

param ModelNameBeginsWith

The beginning of the name of the ML models being listed.

type DatasetNameBeginsWith

string

param DatasetNameBeginsWith

The beginning of the name of the dataset used by the ML models to be listed.

rtype

dict

returns

Response Syntax

{
    'NextToken': 'string',
    'ModelSummaries': [
        {
            'ModelName': 'string',
            'ModelArn': 'string',
            'DatasetName': 'string',
            'DatasetArn': 'string',
            'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED',
            'CreatedAt': datetime(2015, 1, 1)
        },
    ]
}

Response Structure

  • (dict) --

    • NextToken (string) --

      An opaque pagination token indicating where to continue the listing of ML models.

    • ModelSummaries (list) --

      Provides information on the specified model, including creation time, model and dataset ARNs, and status.

      • (dict) --

        Provides information about the specified ML model, including dataset and model names and ARNs, as well as status.

        • ModelName (string) --

          The name of the ML model.

        • ModelArn (string) --

          The Amazon Resource Name (ARN) of the ML model.

        • DatasetName (string) --

          The name of the dataset being used for the ML model.

        • DatasetArn (string) --

          The Amazon Resource Name (ARN) of the dataset used to create the model.

        • Status (string) --

          Indicates the status of the ML model.

        • CreatedAt (datetime) --

          The time at which the specific model was created.
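
Because the listing is paginated through NextToken, collecting every summary takes a short loop. A minimal sketch (assuming a boto3 `lookoutequipment` client; any of the filter parameters above can be passed through as keyword arguments):

```python
def list_all_models(client, **filters):
    """Follow NextToken until the ML model listing is exhausted."""
    summaries, token = [], None
    while True:
        kwargs = dict(filters)
        if token:
            kwargs["NextToken"] = token
        resp = client.list_models(**kwargs)
        summaries.extend(resp.get("ModelSummaries", []))
        token = resp.get("NextToken")
        if not token:
            return summaries
```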

DeleteModel (new) Link ¶

Deletes an ML model currently available for Amazon Lookout for Equipment. This will prevent it from being used with an inference scheduler, even one that is already set up.

See also: AWS API Documentation

Request Syntax

client.delete_model(
    ModelName='string'
)
type ModelName

string

param ModelName

[REQUIRED]

The name of the ML model to be deleted.

returns

None

ListDatasets (new) Link ¶

Lists all datasets currently available in your account, filtering on the dataset name.

See also: AWS API Documentation

Request Syntax

client.list_datasets(
    NextToken='string',
    MaxResults=123,
    DatasetNameBeginsWith='string'
)
type NextToken

string

param NextToken

An opaque pagination token indicating where to continue the listing of datasets.

type MaxResults

integer

param MaxResults

Specifies the maximum number of datasets to list.

type DatasetNameBeginsWith

string

param DatasetNameBeginsWith

The beginning of the name of the datasets to be listed.

rtype

dict

returns

Response Syntax

{
    'NextToken': 'string',
    'DatasetSummaries': [
        {
            'DatasetName': 'string',
            'DatasetArn': 'string',
            'Status': 'CREATED'|'INGESTION_IN_PROGRESS'|'ACTIVE',
            'CreatedAt': datetime(2015, 1, 1)
        },
    ]
}

Response Structure

  • (dict) --

    • NextToken (string) --

      An opaque pagination token indicating where to continue the listing of datasets.

    • DatasetSummaries (list) --

      Provides information about the specified dataset, including creation time, dataset ARN, and status.

      • (dict) --

        Contains information about the specific dataset, including name, ARN, and status.

        • DatasetName (string) --

          The name of the dataset.

        • DatasetArn (string) --

          The Amazon Resource Name (ARN) of the specified dataset.

        • Status (string) --

          Indicates the status of the dataset.

        • CreatedAt (datetime) --

          The time at which the dataset was created in Amazon Lookout for Equipment.

ListInferenceExecutions (new) Link ¶

Lists all inference executions that have been performed by the specified inference scheduler.

See also: AWS API Documentation

Request Syntax

client.list_inference_executions(
    NextToken='string',
    MaxResults=123,
    InferenceSchedulerName='string',
    DataStartTimeAfter=datetime(2015, 1, 1),
    DataEndTimeBefore=datetime(2015, 1, 1),
    Status='IN_PROGRESS'|'SUCCESS'|'FAILED'
)
type NextToken

string

param NextToken

An opaque pagination token indicating where to continue the listing of inference executions.

type MaxResults

integer

param MaxResults

Specifies the maximum number of inference executions to list.

type InferenceSchedulerName

string

param InferenceSchedulerName

[REQUIRED]

The name of the inference scheduler whose inference executions are listed.

type DataStartTimeAfter

datetime

param DataStartTimeAfter

The time reference in the dataset used for the inference after which Amazon Lookout for Equipment started the inference execution.

type DataEndTimeBefore

datetime

param DataEndTimeBefore

The time reference in the dataset used for the inference before which Amazon Lookout for Equipment stopped the inference execution.

type Status

string

param Status

The status of the inference execution.

rtype

dict

returns

Response Syntax

{
    'NextToken': 'string',
    'InferenceExecutionSummaries': [
        {
            'ModelName': 'string',
            'ModelArn': 'string',
            'InferenceSchedulerName': 'string',
            'InferenceSchedulerArn': 'string',
            'ScheduledStartTime': datetime(2015, 1, 1),
            'DataStartTime': datetime(2015, 1, 1),
            'DataEndTime': datetime(2015, 1, 1),
            'DataInputConfiguration': {
                'S3InputConfiguration': {
                    'Bucket': 'string',
                    'Prefix': 'string'
                },
                'InputTimeZoneOffset': 'string',
                'InferenceInputNameConfiguration': {
                    'TimestampFormat': 'string',
                    'ComponentTimestampDelimiter': 'string'
                }
            },
            'DataOutputConfiguration': {
                'S3OutputConfiguration': {
                    'Bucket': 'string',
                    'Prefix': 'string'
                },
                'KmsKeyId': 'string'
            },
            'CustomerResultObject': {
                'Bucket': 'string',
                'Key': 'string'
            },
            'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED',
            'FailedReason': 'string'
        },
    ]
}

Response Structure

  • (dict) --

    • NextToken (string) --

      An opaque pagination token indicating where to continue the listing of inference executions.

    • InferenceExecutionSummaries (list) --

      Provides an array of information about the individual inference executions returned from the ListInferenceExecutions operation, including model used, inference scheduler, data configuration, and so on.

      • (dict) --

        Contains information about the specific inference execution, including input and output data configuration, inference scheduling information, status, and so on.

        • ModelName (string) --

          The name of the ML model being used for the inference execution.

        • ModelArn (string) --

          The Amazon Resource Name (ARN) of the ML model used for the inference execution.

        • InferenceSchedulerName (string) --

          The name of the inference scheduler being used for the inference execution.

        • InferenceSchedulerArn (string) --

          The Amazon Resource Name (ARN) of the inference scheduler being used for the inference execution.

        • ScheduledStartTime (datetime) --

          Indicates the start time at which the inference scheduler began the specific inference execution.

        • DataStartTime (datetime) --

          Indicates the time reference in the dataset at which the inference execution began.

        • DataEndTime (datetime) --

          Indicates the time reference in the dataset at which the inference execution stopped.

        • DataInputConfiguration (dict) --

          Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location.

          • S3InputConfiguration (dict) --

            Specifies configuration information for the input data for the inference, including the S3 location of the input data.

            • Bucket (string) --

              The bucket containing the input dataset for the inference.

            • Prefix (string) --

              The prefix for the S3 bucket used for the input data for the inference.

          • InputTimeZoneOffset (string) --

            Indicates the difference between your time zone and Greenwich Mean Time (GMT).

          • InferenceInputNameConfiguration (dict) --

            Specifies configuration information for the input data for the inference, including timestamp format and delimiter.

            • TimestampFormat (string) --

              The format of the timestamp: either Epoch time or standard format, with or without hyphens (-).

            • ComponentTimestampDelimiter (string) --

              Indicates the delimiter character used between items in the data.

        • DataOutputConfiguration (dict) --

          Specifies configuration information for the output results from the inference execution, including the output S3 location.

          • S3OutputConfiguration (dict) --

            Specifies configuration information for the output results from the inference, including the output S3 location.

            • Bucket (string) --

              The bucket containing the output results from the inference.

            • Prefix (string) --

              The prefix for the S3 bucket used for the output results from the inference.

          • KmsKeyId (string) --

            The ID number for the AWS KMS key used to encrypt the inference output.

        • CustomerResultObject (dict) --

          • Bucket (string) --

            The name of the specific S3 bucket.

          • Key (string) --

            The key (object name) of the S3 object containing the results of the inference execution.

        • Status (string) --

          Indicates the status of the inference execution.

        • FailedReason (string) --

          Specifies the reason for failure when an inference execution has failed.
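
Filtering on Status='FAILED' and reading FailedReason from each summary is a common way to triage a scheduler. A hedged sketch (the client is assumed to be a boto3 `lookoutequipment` client; additional filter parameters can be passed through):

```python
def failed_execution_reasons(client, scheduler_name, **kwargs):
    """List FAILED inference executions for a scheduler with their failure reasons.

    Returns (ScheduledStartTime, FailedReason) pairs. Note: only the first
    page of results is inspected; follow NextToken for a complete listing.
    """
    resp = client.list_inference_executions(
        InferenceSchedulerName=scheduler_name, Status="FAILED", **kwargs)
    return [
        (s["ScheduledStartTime"], s.get("FailedReason"))
        for s in resp.get("InferenceExecutionSummaries", [])
    ]
```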

DeleteDataset (new) Link ¶

Deletes a dataset and its associated artifacts. The operation first checks whether any inference scheduler or data ingestion job is currently using the dataset; if none is, the dataset, its metadata, and any associated data stored in S3 are deleted. This does not affect any models that used this dataset for training and evaluation, but it does prevent the dataset from being used that way in the future.

See also: AWS API Documentation

Request Syntax

client.delete_dataset(
    DatasetName='string'
)
type DatasetName

string

param DatasetName

[REQUIRED]

The name of the dataset to be deleted.

returns

None

ListDataIngestionJobs (new) Link ¶

Provides a list of all data ingestion jobs, including dataset name and ARN, S3 location of the input data, status, and so on.

See also: AWS API Documentation

Request Syntax

client.list_data_ingestion_jobs(
    DatasetName='string',
    NextToken='string',
    MaxResults=123,
    Status='IN_PROGRESS'|'SUCCESS'|'FAILED'
)
type DatasetName

string

param DatasetName

The name of the dataset being used for the data ingestion job.

type NextToken

string

param NextToken

An opaque pagination token indicating where to continue the listing of data ingestion jobs.

type MaxResults

integer

param MaxResults

Specifies the maximum number of data ingestion jobs to list.

type Status

string

param Status

Indicates the status of the data ingestion job.

rtype

dict

returns

Response Syntax

{
    'NextToken': 'string',
    'DataIngestionJobSummaries': [
        {
            'JobId': 'string',
            'DatasetName': 'string',
            'DatasetArn': 'string',
            'IngestionInputConfiguration': {
                'S3InputConfiguration': {
                    'Bucket': 'string',
                    'Prefix': 'string'
                }
            },
            'Status': 'IN_PROGRESS'|'SUCCESS'|'FAILED'
        },
    ]
}

Response Structure

  • (dict) --

    • NextToken (string) --

      An opaque pagination token indicating where to continue the listing of data ingestion jobs.

    • DataIngestionJobSummaries (list) --

      Specifies information about the specific data ingestion job, including dataset name and status.

      • (dict) --

        Provides information about a specified data ingestion job, including dataset information, data ingestion configuration, and status.

        • JobId (string) --

          Indicates the job ID of the data ingestion job.

        • DatasetName (string) --

          The name of the dataset used for the data ingestion job.

        • DatasetArn (string) --

          The Amazon Resource Name (ARN) of the dataset used in the data ingestion job.

        • IngestionInputConfiguration (dict) --

          Specifies information for the input data for the data ingestion job, including data S3 location parameters.

          • S3InputConfiguration (dict) --

            The location information for the S3 bucket used for input data for the data ingestion.

            • Bucket (string) --

              The name of the S3 bucket used for the input data for the data ingestion.

            • Prefix (string) --

              The prefix for the S3 location being used for the input data for the data ingestion.

        • Status (string) --

          Indicates the status of the data ingestion job.

DeleteInferenceScheduler (new) Link ¶

Deletes an inference scheduler that has been set up. Already processed output results are not affected.

See also: AWS API Documentation

Request Syntax

client.delete_inference_scheduler(
    InferenceSchedulerName='string'
)
type InferenceSchedulerName

string

param InferenceSchedulerName

[REQUIRED]

The name of the inference scheduler to be deleted.

returns

None

UpdateInferenceScheduler (new) Link ¶

Updates an inference scheduler.

See also: AWS API Documentation

Request Syntax

client.update_inference_scheduler(
    InferenceSchedulerName='string',
    DataDelayOffsetInMinutes=123,
    DataUploadFrequency='PT5M'|'PT10M'|'PT15M'|'PT30M'|'PT1H',
    DataInputConfiguration={
        'S3InputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'InputTimeZoneOffset': 'string',
        'InferenceInputNameConfiguration': {
            'TimestampFormat': 'string',
            'ComponentTimestampDelimiter': 'string'
        }
    },
    DataOutputConfiguration={
        'S3OutputConfiguration': {
            'Bucket': 'string',
            'Prefix': 'string'
        },
        'KmsKeyId': 'string'
    },
    RoleArn='string'
)
type InferenceSchedulerName

string

param InferenceSchedulerName

[REQUIRED]

The name of the inference scheduler to be updated.

type DataDelayOffsetInMinutes

integer

param DataDelayOffsetInMinutes

A period of time (in minutes) by which inference on the data is delayed after the data starts. For instance, with an offset delay of five minutes, inference does not begin until five minutes after the first data measurement: the scheduler wakes at the configured frequency, then waits the additional five minutes before checking the customer S3 bucket. This lets you keep uploading data at the same frequency without stopping and restarting the scheduler.

type DataUploadFrequency

string

param DataUploadFrequency

How often data is uploaded to the source S3 bucket for the input data. The value chosen is the length of time between data uploads. For instance, if you select 5 minutes, new real-time data is expected in the source bucket once every 5 minutes. This frequency also determines how often Amazon Lookout for Equipment starts a scheduled inference on your data; in this example, once every 5 minutes.

type DataInputConfiguration

dict

param DataInputConfiguration

Specifies information for the input data for the inference scheduler, including delimiter, format, and dataset location.

  • S3InputConfiguration (dict) --

    Specifies configuration information for the input data for the inference, including the S3 location of the input data.

    • Bucket (string) -- [REQUIRED]

      The bucket containing the input dataset for the inference.

    • Prefix (string) --

      The prefix for the S3 bucket used for the input data for the inference.

  • InputTimeZoneOffset (string) --

    Indicates the difference between your time zone and Greenwich Mean Time (GMT).

  • InferenceInputNameConfiguration (dict) --

    Specifies configuration information for the input data for the inference, including timestamp format and delimiter.

    • TimestampFormat (string) --

      The format of the timestamp: either Epoch time or standard format, with or without hyphens (-).

    • ComponentTimestampDelimiter (string) --

      Indicates the delimiter character used between items in the data.

type DataOutputConfiguration

dict

param DataOutputConfiguration

Specifies information for the output results from the inference scheduler, including the output S3 location.

  • S3OutputConfiguration (dict) -- [REQUIRED]

    Specifies configuration information for the output results from the inference, including the output S3 location.

    • Bucket (string) -- [REQUIRED]

      The bucket containing the output results from the inference.

    • Prefix (string) --

      The prefix for the S3 bucket used for the output results from the inference.

  • KmsKeyId (string) --

    The ID number for the AWS KMS key used to encrypt the inference output.

type RoleArn

string

param RoleArn

The Amazon Resource Name (ARN) of a role with permission to access the data source for the inference scheduler.

returns

None
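
The interplay of DataUploadFrequency and DataDelayOffsetInMinutes described above can be made concrete with a small request builder. A sketch (parameter defaults are illustrative, not service defaults):

```python
def build_update_scheduler_request(scheduler_name, upload_frequency="PT10M",
                                   delay_minutes=5):
    """Assemble kwargs for update_inference_scheduler.

    With DataUploadFrequency 'PT10M' and a 5-minute delay offset, the
    scheduler wakes every 10 minutes and waits an extra 5 minutes before
    reading the input S3 bucket, so uploads in flight at the boundary
    are not missed.
    """
    return {
        "InferenceSchedulerName": scheduler_name,
        "DataUploadFrequency": upload_frequency,
        "DataDelayOffsetInMinutes": delay_minutes,
    }
```

The result is passed as `client.update_inference_scheduler(**build_update_scheduler_request(...))`.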

StartInferenceScheduler (new) Link ¶

Starts an inference scheduler.

See also: AWS API Documentation

Request Syntax

client.start_inference_scheduler(
    InferenceSchedulerName='string'
)
type InferenceSchedulerName

string

param InferenceSchedulerName

[REQUIRED]

The name of the inference scheduler to be started.

rtype

dict

returns

Response Syntax

{
    'ModelArn': 'string',
    'ModelName': 'string',
    'InferenceSchedulerName': 'string',
    'InferenceSchedulerArn': 'string',
    'Status': 'PENDING'|'RUNNING'|'STOPPING'|'STOPPED'
}

Response Structure

  • (dict) --

    • ModelArn (string) --

      The Amazon Resource Name (ARN) of the ML model being used by the inference scheduler.

    • ModelName (string) --

      The name of the ML model being used by the inference scheduler.

    • InferenceSchedulerName (string) --

      The name of the inference scheduler being started.

    • InferenceSchedulerArn (string) --

      The Amazon Resource Name (ARN) of the inference scheduler being started.

    • Status (string) --

      Indicates the status of the inference scheduler.
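
Given the PENDING/RUNNING/STOPPING/STOPPED statuses above, a thin wrapper can report whether the scheduler is coming up after a start request. A minimal sketch (the client is assumed to be a boto3 `lookoutequipment` client):

```python
def start_scheduler(client, scheduler_name):
    """Start an inference scheduler and report whether it is starting or running."""
    resp = client.start_inference_scheduler(InferenceSchedulerName=scheduler_name)
    return resp["Status"] in ("PENDING", "RUNNING")
```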