Amazon Personalize

2024/05/02 - Amazon Personalize - 3 new API methods

Changes  This release adds the ability to delete users and their data, including their metadata and interactions data, from a dataset group.

DescribeDataDeletionJob (new) Link ¶

Describes the data deletion job created by CreateDataDeletionJob, including the job status.

See also: AWS API Documentation

Request Syntax

client.describe_data_deletion_job(
    dataDeletionJobArn='string'
)
type dataDeletionJobArn

string

param dataDeletionJobArn

[REQUIRED]

The Amazon Resource Name (ARN) of the data deletion job.

rtype

dict

returns

Response Syntax

{
    'dataDeletionJob': {
        'jobName': 'string',
        'dataDeletionJobArn': 'string',
        'datasetGroupArn': 'string',
        'dataSource': {
            'dataLocation': 'string'
        },
        'roleArn': 'string',
        'status': 'string',
        'numDeleted': 123,
        'creationDateTime': datetime(2015, 1, 1),
        'lastUpdatedDateTime': datetime(2015, 1, 1),
        'failureReason': 'string'
    }
}

Response Structure

  • (dict) --

    • dataDeletionJob (dict) --

      Information about the data deletion job, including the status.

      The status is one of the following values:

      • PENDING

      • IN_PROGRESS

      • COMPLETED

      • FAILED

      • jobName (string) --

        The name of the data deletion job.

      • dataDeletionJobArn (string) --

        The Amazon Resource Name (ARN) of the data deletion job.

      • datasetGroupArn (string) --

        The Amazon Resource Name (ARN) of the dataset group the job deletes records from.

      • dataSource (dict) --

        Describes the data source that contains the data to upload to a dataset, or the list of records to delete from Amazon Personalize.

        • dataLocation (string) --

          For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete.

          For example:

          s3://bucket-name/folder-name/fileName.csv

          If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolders. Use the following syntax with a / after the folder name:

          s3://bucket-name/folder-name/

      • roleArn (string) --

        The Amazon Resource Name (ARN) of the IAM role that has permissions to read from the Amazon S3 data source.

      • status (string) --

        The status of the data deletion job.

        A data deletion job can have one of the following statuses:

        • PENDING > IN_PROGRESS > COMPLETED -or- FAILED

      • numDeleted (integer) --

        The number of records deleted by a COMPLETED job.

      • creationDateTime (datetime) --

        The creation date and time (in Unix time) of the data deletion job.

      • lastUpdatedDateTime (datetime) --

        The date and time (in Unix time) the data deletion job was last updated.

      • failureReason (string) --

        If a data deletion job fails, provides the reason why.
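Because a job moves through PENDING and IN_PROGRESS before reaching COMPLETED or FAILED, callers typically poll DescribeDataDeletionJob until a terminal status appears. A minimal polling sketch is below; the helper name, its defaults, and the injected `describe_fn` parameter are illustrative and not part of the SDK. In practice you would pass `boto3.client('personalize').describe_data_deletion_job` as `describe_fn`.

```python
import time

# Terminal statuses for a data deletion job, per the status values above.
TERMINAL_STATUSES = {"COMPLETED", "FAILED"}

def wait_for_data_deletion_job(describe_fn, job_arn, poll_seconds=60, max_polls=1440):
    """Poll a data deletion job until it reaches a terminal status.

    `describe_fn` is any callable with the describe_data_deletion_job
    signature, e.g. boto3.client('personalize').describe_data_deletion_job.
    Returns the final dataDeletionJob dict.
    """
    for _ in range(max_polls):
        job = describe_fn(dataDeletionJobArn=job_arn)["dataDeletionJob"]
        if job["status"] in TERMINAL_STATUSES:
            # On FAILED, job.get("failureReason") explains why.
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"Job {job_arn} did not finish after {max_polls} polls")
```

Injecting the client call keeps the helper testable without AWS credentials; deletion can take up to a day, so the generous default `max_polls` reflects that.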

CreateDataDeletionJob (new) Link ¶

Creates a batch job that deletes all references to specific users from an Amazon Personalize dataset group in batches. You specify the users to delete in a CSV file of userIds in an Amazon S3 bucket. After a job completes, Amazon Personalize no longer trains on the users’ data and no longer considers the users when generating user segments. For more information about creating a data deletion job, see Deleting users.

  • Your input file must be a CSV file with a single USER_ID column that lists the user IDs. For more information about preparing the CSV file, see Preparing your data deletion file and uploading it to Amazon S3.

  • To give Amazon Personalize permission to access your input CSV file of userIds, you must specify an IAM service role that has permission to read from the data source. This role needs GetObject and ListBucket permissions for the bucket and its content. These are the same permissions required to import data. For information on granting access to your Amazon S3 bucket, see Giving Amazon Personalize Access to Amazon S3 Resources.

After you create a job, it can take up to a day to delete all references to the users from datasets and models. Until the job completes, Amazon Personalize continues to use the data when training, and if you use a User Segmentation recipe, the users might appear in user segments.

Status

A data deletion job can have one of the following statuses:

  • PENDING > IN_PROGRESS > COMPLETED -or- FAILED

To get the status of the data deletion job, call the DescribeDataDeletionJob API operation and specify the Amazon Resource Name (ARN) of the job. If the status is FAILED, the response includes a failureReason key, which describes why the job failed.

See also: AWS API Documentation

Request Syntax

client.create_data_deletion_job(
    jobName='string',
    datasetGroupArn='string',
    dataSource={
        'dataLocation': 'string'
    },
    roleArn='string',
    tags=[
        {
            'tagKey': 'string',
            'tagValue': 'string'
        },
    ]
)
type jobName

string

param jobName

[REQUIRED]

The name for the data deletion job.

type datasetGroupArn

string

param datasetGroupArn

[REQUIRED]

The Amazon Resource Name (ARN) of the dataset group that has the datasets you want to delete records from.

type dataSource

dict

param dataSource

[REQUIRED]

The Amazon S3 bucket that contains the list of userIds of the users to delete.

  • dataLocation (string) --

    For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete.

    For example:

    s3://bucket-name/folder-name/fileName.csv

    If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolders. Use the following syntax with a / after the folder name:

    s3://bucket-name/folder-name/

type roleArn

string

param roleArn

[REQUIRED]

The Amazon Resource Name (ARN) of the IAM role that has permissions to read from the Amazon S3 data source.

type tags

list

param tags

A list of tags to apply to the data deletion job.

  • (dict) --

    The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information see Tagging Amazon Personalize resources.

    • tagKey (string) -- [REQUIRED]

      One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.

    • tagValue (string) -- [REQUIRED]

      The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).

rtype

dict

returns

Response Syntax

{
    'dataDeletionJobArn': 'string'
}

Response Structure

  • (dict) --

    • dataDeletionJobArn (string) --

      The Amazon Resource Name (ARN) of the data deletion job.
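The request above can be assembled with a small helper that mirrors the parameter shapes documented here. This is a sketch: the function name, the s3:// path check, and all ARNs in the usage comment are illustrative placeholders, not SDK API.

```python
def build_create_data_deletion_job_request(job_name, dataset_group_arn,
                                           s3_data_location, role_arn, tags=None):
    """Assemble the keyword arguments for create_data_deletion_job.

    `s3_data_location` must point at the CSV of user IDs (or a folder
    path ending in '/'); a light check below catches the most common mistake.
    """
    if not s3_data_location.startswith("s3://"):
        raise ValueError("dataLocation must be an s3:// path")
    request = {
        "jobName": job_name,
        "datasetGroupArn": dataset_group_arn,
        "dataSource": {"dataLocation": s3_data_location},
        "roleArn": role_arn,
    }
    if tags:  # tags are optional; each entry needs tagKey and tagValue
        request["tags"] = [{"tagKey": k, "tagValue": v} for k, v in tags.items()]
    return request

# Usage with boto3 (assumed available and configured; ARNs are placeholders):
# import boto3
# client = boto3.client("personalize")
# response = client.create_data_deletion_job(
#     **build_create_data_deletion_job_request(
#         "delete-opted-out-users",
#         "arn:aws:personalize:us-west-2:123456789012:dataset-group/my-group",
#         "s3://my-bucket/deletions/user_ids.csv",
#         "arn:aws:iam::123456789012:role/PersonalizeS3Role",
#     )
# )
```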

ListDataDeletionJobs (new) Link ¶

Returns a list of data deletion jobs for a dataset group ordered by creation time, with the most recent first. When a dataset group is not specified, all the data deletion jobs associated with the account are listed. The response provides the properties for each job, including the Amazon Resource Name (ARN). For more information on data deletion jobs, see Deleting users.

See also: AWS API Documentation

Request Syntax

client.list_data_deletion_jobs(
    datasetGroupArn='string',
    nextToken='string',
    maxResults=123
)
type datasetGroupArn

string

param datasetGroupArn

The Amazon Resource Name (ARN) of the dataset group to list data deletion jobs for.

type nextToken

string

param nextToken

A token returned from the previous call to ListDataDeletionJobs for getting the next set of jobs (if they exist).

type maxResults

integer

param maxResults

The maximum number of data deletion jobs to return.

rtype

dict

returns

Response Syntax

{
    'dataDeletionJobs': [
        {
            'dataDeletionJobArn': 'string',
            'datasetGroupArn': 'string',
            'jobName': 'string',
            'status': 'string',
            'creationDateTime': datetime(2015, 1, 1),
            'lastUpdatedDateTime': datetime(2015, 1, 1),
            'failureReason': 'string'
        },
    ],
    'nextToken': 'string'
}

Response Structure

  • (dict) --

    • dataDeletionJobs (list) --

      The list of data deletion jobs.

      • (dict) --

        Provides a summary of the properties of a data deletion job. For a complete listing, call the DescribeDataDeletionJob API operation.

        • dataDeletionJobArn (string) --

          The Amazon Resource Name (ARN) of the data deletion job.

        • datasetGroupArn (string) --

          The Amazon Resource Name (ARN) of the dataset group the job deleted records from.

        • jobName (string) --

          The name of the data deletion job.

        • status (string) --

          The status of the data deletion job.

          A data deletion job can have one of the following statuses:

          • PENDING > IN_PROGRESS > COMPLETED -or- FAILED

        • creationDateTime (datetime) --

          The creation date and time (in Unix time) of the data deletion job.

        • lastUpdatedDateTime (datetime) --

          The date and time (in Unix time) the data deletion job was last updated.

        • failureReason (string) --

          If a data deletion job fails, provides the reason why.

    • nextToken (string) --

      A token for getting the next set of data deletion jobs (if they exist).
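Because results are paginated via nextToken, listing every job means looping until the token is absent. A minimal sketch follows; the generator name and the injected `list_fn` parameter are illustrative, and in practice you would pass `boto3.client('personalize').list_data_deletion_jobs` as `list_fn`.

```python
def iter_data_deletion_jobs(list_fn, dataset_group_arn=None, max_results=100):
    """Yield every data deletion job summary, following nextToken.

    `list_fn` is any callable with the list_data_deletion_jobs signature,
    e.g. boto3.client('personalize').list_data_deletion_jobs.
    Omitting dataset_group_arn lists jobs across the whole account.
    """
    kwargs = {"maxResults": max_results}
    if dataset_group_arn:
        kwargs["datasetGroupArn"] = dataset_group_arn
    while True:
        page = list_fn(**kwargs)
        yield from page.get("dataDeletionJobs", [])
        token = page.get("nextToken")
        if not token:
            return  # no more pages
        kwargs["nextToken"] = token
```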