Amazon Personalize

2019/11/14 - Amazon Personalize - 3 new API methods

Changes: Update personalize client to latest version

DescribeBatchInferenceJob (new)

Gets the properties of a batch inference job including name, Amazon Resource Name (ARN), status, input and output configurations, and the ARN of the solution version used to generate the recommendations.

See also: AWS API Documentation

Request Syntax

client.describe_batch_inference_job(
    batchInferenceJobArn='string'
)
type batchInferenceJobArn:

string

param batchInferenceJobArn:

[REQUIRED]

The ARN of the batch inference job to describe.

rtype:

dict

returns:

Response Syntax

{
    'batchInferenceJob': {
        'jobName': 'string',
        'batchInferenceJobArn': 'string',
        'failureReason': 'string',
        'solutionVersionArn': 'string',
        'numResults': 123,
        'jobInput': {
            's3DataSource': {
                'path': 'string',
                'kmsKeyArn': 'string'
            }
        },
        'jobOutput': {
            's3DataDestination': {
                'path': 'string',
                'kmsKeyArn': 'string'
            }
        },
        'roleArn': 'string',
        'status': 'string',
        'creationDateTime': datetime(2015, 1, 1),
        'lastUpdatedDateTime': datetime(2015, 1, 1)
    }
}

Response Structure

  • (dict) --

    • batchInferenceJob (dict) --

      Information on the specified batch inference job.

      • jobName (string) --

        The name of the batch inference job.

      • batchInferenceJobArn (string) --

        The Amazon Resource Name (ARN) of the batch inference job.

      • failureReason (string) --

        If the batch inference job failed, the reason for the failure.

      • solutionVersionArn (string) --

        The Amazon Resource Name (ARN) of the solution version from which the batch inference job was created.

      • numResults (integer) --

        The number of recommendations generated by the batch inference job. This number includes the error messages generated for failed input records.

      • jobInput (dict) --

        The Amazon S3 path that leads to the input data used to generate the batch inference job.

        • s3DataSource (dict) --

          The URI of the Amazon S3 location that contains your input data. The Amazon S3 bucket must be in the same region as the API endpoint you are calling.

          • path (string) --

            The file path of the Amazon S3 bucket.

          • kmsKeyArn (string) --

            The Amazon Resource Name (ARN) of the AWS Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files of a batch inference job.

      • jobOutput (dict) --

        The Amazon S3 bucket that contains the output data generated by the batch inference job.

        • s3DataDestination (dict) --

          Information on the Amazon S3 bucket in which the batch inference job's output is stored.

          • path (string) --

            The file path of the Amazon S3 bucket.

          • kmsKeyArn (string) --

            The Amazon Resource Name (ARN) of the AWS Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files of a batch inference job.

      • roleArn (string) --

        The ARN of the AWS Identity and Access Management (IAM) role that requested the batch inference job.

      • status (string) --

        The status of the batch inference job. The status is one of the following values:

        • PENDING

        • IN PROGRESS

        • ACTIVE

        • CREATE FAILED

      • creationDateTime (datetime) --

        The time at which the batch inference job was created.

      • lastUpdatedDateTime (datetime) --

        The time at which the batch inference job was last updated.
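
As a usage sketch, the snippet below polls DescribeBatchInferenceJob until the job reaches one of the terminal statuses listed above. The job ARN is a placeholder; substitute the ARN returned by CreateBatchInferenceJob.

import time

import boto3

personalize = boto3.client('personalize')

# Placeholder ARN; use the value returned by create_batch_inference_job.
job_arn = 'arn:aws:personalize:us-east-1:123456789012:batch-inference-job/example'

while True:
    job = personalize.describe_batch_inference_job(
        batchInferenceJobArn=job_arn
    )['batchInferenceJob']
    if job['status'] in ('ACTIVE', 'CREATE FAILED'):
        break
    time.sleep(60)  # batch jobs typically take minutes; poll sparingly

if job['status'] == 'CREATE FAILED':
    print('Job failed:', job.get('failureReason'))
else:
    print('Output written to:', job['jobOutput']['s3DataDestination']['path'])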

CreateBatchInferenceJob (new)

Creates a batch inference job. The operation can handle up to 50 million records, and the input file must be in JSON format. For more information, see recommendations-batch in the Amazon Personalize Developer Guide.
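
For illustration only: with a user-personalization solution, the input file is typically JSON Lines, one object per line. The userId field here assumes a user-personalization recipe; other recipe types expect itemId or an itemList instead.

{"userId": "105"}
{"userId": "106"}
{"userId": "107"}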

See also: AWS API Documentation

Request Syntax

client.create_batch_inference_job(
    jobName='string',
    solutionVersionArn='string',
    numResults=123,
    jobInput={
        's3DataSource': {
            'path': 'string',
            'kmsKeyArn': 'string'
        }
    },
    jobOutput={
        's3DataDestination': {
            'path': 'string',
            'kmsKeyArn': 'string'
        }
    },
    roleArn='string'
)
type jobName:

string

param jobName:

[REQUIRED]

The name of the batch inference job to create.

type solutionVersionArn:

string

param solutionVersionArn:

[REQUIRED]

The Amazon Resource Name (ARN) of the solution version that will be used to generate the batch inference recommendations.

type numResults:

integer

param numResults:

The number of recommendations to retrieve.

type jobInput:

dict

param jobInput:

[REQUIRED]

The Amazon S3 path that leads to the input file to base your recommendations on. The input material must be in JSON format.

  • s3DataSource (dict) -- [REQUIRED]

    The URI of the Amazon S3 location that contains your input data. The Amazon S3 bucket must be in the same region as the API endpoint you are calling.

    • path (string) -- [REQUIRED]

      The file path of the Amazon S3 bucket.

    • kmsKeyArn (string) --

      The Amazon Resource Name (ARN) of the AWS Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files of a batch inference job.

type jobOutput:

dict

param jobOutput:

[REQUIRED]

The path to the Amazon S3 bucket where the job's output will be stored.

  • s3DataDestination (dict) -- [REQUIRED]

    Information on the Amazon S3 bucket in which the batch inference job's output is stored.

    • path (string) -- [REQUIRED]

      The file path of the Amazon S3 bucket.

    • kmsKeyArn (string) --

      The Amazon Resource Name (ARN) of the AWS Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files of a batch inference job.

type roleArn:

string

param roleArn:

[REQUIRED]

The ARN of the AWS Identity and Access Management (IAM) role with permissions to read from your input Amazon S3 bucket and write to your output Amazon S3 bucket.

rtype:

dict

returns:

Response Syntax

{
    'batchInferenceJobArn': 'string'
}

Response Structure

  • (dict) --

    • batchInferenceJobArn (string) --

      The ARN of the batch inference job.
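
A minimal request sketch; every ARN, bucket path, and role name below is a placeholder for your own resources:

import boto3

personalize = boto3.client('personalize')

response = personalize.create_batch_inference_job(
    jobName='example-batch-job',
    # Placeholder solution version ARN.
    solutionVersionArn='arn:aws:personalize:us-east-1:123456789012:solution/example/1234abcd',
    numResults=25,
    jobInput={
        's3DataSource': {'path': 's3://example-bucket/batch-input/users.json'}
    },
    jobOutput={
        's3DataDestination': {'path': 's3://example-bucket/batch-output/'}
    },
    # IAM role able to read the input bucket and write to the output bucket.
    roleArn='arn:aws:iam::123456789012:role/PersonalizeBatchRole'
)
print(response['batchInferenceJobArn'])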

ListBatchInferenceJobs (new)

Gets a list of the batch inference jobs that have been created from a solution version.

See also: AWS API Documentation

Request Syntax

client.list_batch_inference_jobs(
    solutionVersionArn='string',
    nextToken='string',
    maxResults=123
)
type solutionVersionArn:

string

param solutionVersionArn:

The Amazon Resource Name (ARN) of the solution version from which the batch inference jobs were created.

type nextToken:

string

param nextToken:

The token to request the next page of results.

type maxResults:

integer

param maxResults:

The maximum number of batch inference job results to return in each page. The default value is 100.

rtype:

dict

returns:

Response Syntax

{
    'batchInferenceJobs': [
        {
            'batchInferenceJobArn': 'string',
            'jobName': 'string',
            'status': 'string',
            'creationDateTime': datetime(2015, 1, 1),
            'lastUpdatedDateTime': datetime(2015, 1, 1),
            'failureReason': 'string'
        },
    ],
    'nextToken': 'string'
}

Response Structure

  • (dict) --

    • batchInferenceJobs (list) --

      A list containing information on each job that is returned.

      • (dict) --

        A truncated version of the BatchInferenceJob datatype. The ListBatchInferenceJobs operation returns a list of batch inference job summaries.

        • batchInferenceJobArn (string) --

          The Amazon Resource Name (ARN) of the batch inference job.

        • jobName (string) --

          The name of the batch inference job.

        • status (string) --

          The status of the batch inference job. The status is one of the following values:

          • PENDING

          • IN PROGRESS

          • ACTIVE

          • CREATE FAILED

        • creationDateTime (datetime) --

          The time at which the batch inference job was created.

        • lastUpdatedDateTime (datetime) --

          The time at which the batch inference job was last updated.

        • failureReason (string) --

          If the batch inference job failed, the reason for the failure.

    • nextToken (string) --

      The token to use to retrieve the next page of results. The value is null when there are no more results to return.
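
A pagination sketch that follows nextToken until it is exhausted; the solution version ARN is a placeholder:

import boto3

personalize = boto3.client('personalize')

jobs = []
kwargs = {
    # Placeholder solution version ARN.
    'solutionVersionArn': 'arn:aws:personalize:us-east-1:123456789012:solution/example/1234abcd',
    'maxResults': 100
}
while True:
    page = personalize.list_batch_inference_jobs(**kwargs)
    jobs.extend(page['batchInferenceJobs'])
    token = page.get('nextToken')
    if not token:
        break
    kwargs['nextToken'] = token

for job in jobs:
    print(job['jobName'], job['status'])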