AWS Database Migration Service

2020/07/27 - AWS Database Migration Service - 6 new, 4 updated API methods

Changes: Update the dms client to the latest version.

DescribeApplicableIndividualAssessments (new) Link ¶

Provides a list of individual assessments that you can specify for a new premigration assessment run, given one or more parameters.

If you specify an existing migration task, this operation provides the default individual assessments you can specify for that task. Otherwise, the specified parameters model elements of a possible migration task on which to base a premigration assessment run.

To use these migration task modeling parameters, you must specify an existing replication instance, a source database engine, a target database engine, and a migration type. This combination of parameters potentially limits the default individual assessments available for an assessment run created for a corresponding migration task.

If you specify no parameters, this operation provides a list of all possible individual assessments that you can specify for an assessment run. If you specify any one of the task modeling parameters, you must specify all of them or the operation cannot provide a list of individual assessments. The only parameter that you can specify alone is for an existing migration task. The specified task definition then determines the default list of individual assessments that you can specify in an assessment run for the task.

See also: AWS API Documentation

Request Syntax

client.describe_applicable_individual_assessments(
    ReplicationTaskArn='string',
    ReplicationInstanceArn='string',
    SourceEngineName='string',
    TargetEngineName='string',
    MigrationType='full-load'|'cdc'|'full-load-and-cdc',
    MaxRecords=123,
    Marker='string'
)
type ReplicationTaskArn:

string

param ReplicationTaskArn:

Amazon Resource Name (ARN) of a migration task on which you want to base the default list of individual assessments.

type ReplicationInstanceArn:

string

param ReplicationInstanceArn:

ARN of a replication instance on which you want to base the default list of individual assessments.

type SourceEngineName:

string

param SourceEngineName:

Name of a database engine that the specified replication instance supports as a source.

type TargetEngineName:

string

param TargetEngineName:

Name of a database engine that the specified replication instance supports as a target.

type MigrationType:

string

param MigrationType:

Name of the migration type that each provided individual assessment must support.

type MaxRecords:

integer

param MaxRecords:

Maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.

type Marker:

string

param Marker:

Optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

rtype:

dict

returns:

Response Syntax

{
    'IndividualAssessmentNames': [
        'string',
    ],
    'Marker': 'string'
}

Response Structure

  • (dict) --

    • IndividualAssessmentNames (list) --

      List of names for the individual assessments supported by the premigration assessment run that you start based on the specified request parameters. For more information on the available individual assessments, including compatibility with different migration task configurations, see Working with premigration assessment runs in the AWS Database Migration Service User Guide.

      • (string) --

    • Marker (string) --

      Pagination token returned for you to pass to a subsequent request. If you pass this token as the Marker value in a subsequent request, the response includes only records beyond the marker, up to the value specified in the request by MaxRecords.
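The all-or-none rule for the task-modeling parameters described above can be enforced client-side before the call is made. Below is a minimal sketch of a helper that assembles the request kwargs; the helper name and the example ARN are hypothetical, and only parameter names from the request syntax above are used.

```python
def build_describe_assessments_kwargs(replication_task_arn=None,
                                      replication_instance_arn=None,
                                      source_engine_name=None,
                                      target_engine_name=None,
                                      migration_type=None,
                                      max_records=None,
                                      marker=None):
    """Assemble kwargs for describe_applicable_individual_assessments.

    Enforces the rule stated above: the task-modeling parameters
    (ReplicationInstanceArn, SourceEngineName, TargetEngineName,
    MigrationType) must be specified all together or not at all;
    ReplicationTaskArn is the only parameter that may stand alone.
    """
    modeling = {
        'ReplicationInstanceArn': replication_instance_arn,
        'SourceEngineName': source_engine_name,
        'TargetEngineName': target_engine_name,
        'MigrationType': migration_type,
    }
    provided = [v for v in modeling.values() if v is not None]
    if provided and len(provided) != len(modeling):
        raise ValueError(
            'Specify all of the task-modeling parameters or none of them.')
    kwargs = {k: v for k, v in modeling.items() if v is not None}
    if replication_task_arn is not None:
        kwargs['ReplicationTaskArn'] = replication_task_arn
    if max_records is not None:
        kwargs['MaxRecords'] = max_records
    if marker is not None:
        kwargs['Marker'] = marker
    return kwargs
```

The result can then be passed as `client.describe_applicable_individual_assessments(**kwargs)`.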

StartReplicationTaskAssessmentRun (new) Link ¶

Starts a new premigration assessment run for one or more individual assessments of a migration task.

The assessments that you can specify depend on the source and target database engine and the migration type defined for the given task. To run this operation, your migration task must already be created. After you run this operation, you can review the status of each individual assessment. You can also run the migration task manually after the assessment run and its individual assessments complete.

See also: AWS API Documentation

Request Syntax

client.start_replication_task_assessment_run(
    ReplicationTaskArn='string',
    ServiceAccessRoleArn='string',
    ResultLocationBucket='string',
    ResultLocationFolder='string',
    ResultEncryptionMode='string',
    ResultKmsKeyArn='string',
    AssessmentRunName='string',
    IncludeOnly=[
        'string',
    ],
    Exclude=[
        'string',
    ]
)
type ReplicationTaskArn:

string

param ReplicationTaskArn:

[REQUIRED]

Amazon Resource Name (ARN) of the migration task associated with the premigration assessment run that you want to start.

type ServiceAccessRoleArn:

string

param ServiceAccessRoleArn:

[REQUIRED]

ARN of a service role needed to start the assessment run.

type ResultLocationBucket:

string

param ResultLocationBucket:

[REQUIRED]

Amazon S3 bucket where you want AWS DMS to store the results of this assessment run.

type ResultLocationFolder:

string

param ResultLocationFolder:

Folder within an Amazon S3 bucket where you want AWS DMS to store the results of this assessment run.

type ResultEncryptionMode:

string

param ResultEncryptionMode:

Encryption mode that you can specify to encrypt the results of this assessment run. If you don't specify this request parameter, AWS DMS stores the assessment run results without encryption. You can specify one of the following options:

  • "SSE_S3" – The server-side encryption provided as a default by Amazon S3.

  • "SSE_KMS" – AWS Key Management Service (AWS KMS) encryption. This encryption can use either a custom KMS encryption key that you specify or the default KMS encryption key that DMS provides.

type ResultKmsKeyArn:

string

param ResultKmsKeyArn:

ARN of a custom KMS encryption key that you specify when you set ResultEncryptionMode to "SSE_KMS".

type AssessmentRunName:

string

param AssessmentRunName:

[REQUIRED]

Unique name to identify the assessment run.

type IncludeOnly:

list

param IncludeOnly:

Space-separated list of names for specific individual assessments that you want to include. These names come from the default list of individual assessments that AWS DMS supports for the associated migration task. This task is specified by ReplicationTaskArn.

  • (string) --

type Exclude:

list

param Exclude:

Space-separated list of names for specific individual assessments that you want to exclude. These names come from the default list of individual assessments that AWS DMS supports for the associated migration task. This task is specified by ReplicationTaskArn.

  • (string) --

rtype:

dict

returns:

Response Syntax

{
    'ReplicationTaskAssessmentRun': {
        'ReplicationTaskAssessmentRunArn': 'string',
        'ReplicationTaskArn': 'string',
        'Status': 'string',
        'ReplicationTaskAssessmentRunCreationDate': datetime(2015, 1, 1),
        'AssessmentProgress': {
            'IndividualAssessmentCount': 123,
            'IndividualAssessmentCompletedCount': 123
        },
        'LastFailureMessage': 'string',
        'ServiceAccessRoleArn': 'string',
        'ResultLocationBucket': 'string',
        'ResultLocationFolder': 'string',
        'ResultEncryptionMode': 'string',
        'ResultKmsKeyArn': 'string',
        'AssessmentRunName': 'string'
    }
}

Response Structure

  • (dict) --

    • ReplicationTaskAssessmentRun (dict) --

      The premigration assessment run that was started.

      • ReplicationTaskAssessmentRunArn (string) --

        Amazon Resource Name (ARN) of this assessment run.

      • ReplicationTaskArn (string) --

        ARN of the migration task associated with this premigration assessment run.

      • Status (string) --

        Assessment run status.

        This status can have one of the following values:

        • "cancelling" – The assessment run was canceled by the CancelReplicationTaskAssessmentRun operation.

        • "deleting" – The assessment run was deleted by the DeleteReplicationTaskAssessmentRun operation.

        • "failed" – At least one individual assessment completed with a failed status.

        • "error-provisioning" – An internal error occurred while resources were provisioned (during provisioning status).

        • "error-executing" – An internal error occurred while individual assessments ran (during running status).

        • "invalid state" – The assessment run is in an unknown state.

        • "passed" – All individual assessments have completed, and none has a failed status.

        • "provisioning" – Resources required to run individual assessments are being provisioned.

        • "running" – Individual assessments are being run.

        • "starting" – The assessment run is starting, but resources are not yet being provisioned for individual assessments.

      • ReplicationTaskAssessmentRunCreationDate (datetime) --

        Date on which the assessment run was created using the StartReplicationTaskAssessmentRun operation.

      • AssessmentProgress (dict) --

        Indication of the completion progress for the individual assessments specified to run.

        • IndividualAssessmentCount (integer) --

          The number of individual assessments that are specified to run.

        • IndividualAssessmentCompletedCount (integer) --

          The number of individual assessments that have completed, successfully or not.

      • LastFailureMessage (string) --

        Last message generated by an individual assessment failure.

      • ServiceAccessRoleArn (string) --

        ARN of the service role used to start the assessment run using the StartReplicationTaskAssessmentRun operation.

      • ResultLocationBucket (string) --

        Amazon S3 bucket where AWS DMS stores the results of this assessment run.

      • ResultLocationFolder (string) --

        Folder in an Amazon S3 bucket where AWS DMS stores the results of this assessment run.

      • ResultEncryptionMode (string) --

        Encryption mode used to encrypt the assessment run results.

      • ResultKmsKeyArn (string) --

        ARN of the AWS KMS encryption key used to encrypt the assessment run results.

      • AssessmentRunName (string) --

        Unique name of the assessment run.
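The parameter rules above (required fields, and ResultKmsKeyArn applying only when ResultEncryptionMode is "SSE_KMS") can be captured in a small request-building helper. This is a sketch, not part of the API; the function name and example values are hypothetical.

```python
def build_start_assessment_run_kwargs(task_arn, role_arn, bucket, run_name,
                                      folder=None, encryption_mode=None,
                                      kms_key_arn=None, include_only=None,
                                      exclude=None):
    """Assemble kwargs for start_replication_task_assessment_run.

    Per the parameter descriptions above, a custom KMS key ARN is only
    meaningful when ResultEncryptionMode is "SSE_KMS".
    """
    if kms_key_arn and encryption_mode != 'SSE_KMS':
        raise ValueError(
            'ResultKmsKeyArn requires ResultEncryptionMode="SSE_KMS"')
    # The four required parameters.
    kwargs = {
        'ReplicationTaskArn': task_arn,
        'ServiceAccessRoleArn': role_arn,
        'ResultLocationBucket': bucket,
        'AssessmentRunName': run_name,
    }
    if folder:
        kwargs['ResultLocationFolder'] = folder
    if encryption_mode:
        kwargs['ResultEncryptionMode'] = encryption_mode
    if kms_key_arn:
        kwargs['ResultKmsKeyArn'] = kms_key_arn
    if include_only:
        kwargs['IncludeOnly'] = list(include_only)
    if exclude:
        kwargs['Exclude'] = list(exclude)
    return kwargs
```

Call the operation as `client.start_replication_task_assessment_run(**kwargs)` and poll the returned Status field until it reaches a terminal value.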

DescribeReplicationTaskAssessmentRuns (new) Link ¶

Returns a paginated list of premigration assessment runs based on filter settings.

These filter settings can specify a combination of premigration assessment runs, migration tasks, replication instances, and assessment run status values.

See also: AWS API Documentation

Request Syntax

client.describe_replication_task_assessment_runs(
    Filters=[
        {
            'Name': 'string',
            'Values': [
                'string',
            ]
        },
    ],
    MaxRecords=123,
    Marker='string'
)
type Filters:

list

param Filters:

Filters applied to the premigration assessment runs described in the form of key-value pairs.

Valid filter names: replication-task-assessment-run-arn, replication-task-arn, replication-instance-arn, status

  • (dict) --

    Identifies the name and value of a filter object. This filter is used to limit the number and type of AWS DMS objects that are returned for a particular Describe* or similar operation.

    • Name (string) -- [REQUIRED]

      The name of the filter as specified for a Describe* or similar operation.

    • Values (list) -- [REQUIRED]

      The filter value, which can specify one or more values used to narrow the returned results.

      • (string) --

type MaxRecords:

integer

param MaxRecords:

The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.

type Marker:

string

param Marker:

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

rtype:

dict

returns:

Response Syntax

{
    'Marker': 'string',
    'ReplicationTaskAssessmentRuns': [
        {
            'ReplicationTaskAssessmentRunArn': 'string',
            'ReplicationTaskArn': 'string',
            'Status': 'string',
            'ReplicationTaskAssessmentRunCreationDate': datetime(2015, 1, 1),
            'AssessmentProgress': {
                'IndividualAssessmentCount': 123,
                'IndividualAssessmentCompletedCount': 123
            },
            'LastFailureMessage': 'string',
            'ServiceAccessRoleArn': 'string',
            'ResultLocationBucket': 'string',
            'ResultLocationFolder': 'string',
            'ResultEncryptionMode': 'string',
            'ResultKmsKeyArn': 'string',
            'AssessmentRunName': 'string'
        },
    ]
}

Response Structure

  • (dict) --

    • Marker (string) --

      A pagination token returned for you to pass to a subsequent request. If you pass this token as the Marker value in a subsequent request, the response includes only records beyond the marker, up to the value specified in the request by MaxRecords.

    • ReplicationTaskAssessmentRuns (list) --

      One or more premigration assessment runs as specified by Filters.

      • (dict) --

        Provides information that describes a premigration assessment run that you have started using the StartReplicationTaskAssessmentRun operation.

        Some of the information appears based on other operations that can return the ReplicationTaskAssessmentRun object.

        • ReplicationTaskAssessmentRunArn (string) --

          Amazon Resource Name (ARN) of this assessment run.

        • ReplicationTaskArn (string) --

          ARN of the migration task associated with this premigration assessment run.

        • Status (string) --

          Assessment run status.

          This status can have one of the following values:

          • "cancelling" – The assessment run was canceled by the CancelReplicationTaskAssessmentRun operation.

          • "deleting" – The assessment run was deleted by the DeleteReplicationTaskAssessmentRun operation.

          • "failed" – At least one individual assessment completed with a failed status.

          • "error-provisioning" – An internal error occurred while resources were provisioned (during provisioning status).

          • "error-executing" – An internal error occurred while individual assessments ran (during running status).

          • "invalid state" – The assessment run is in an unknown state.

          • "passed" – All individual assessments have completed, and none has a failed status.

          • "provisioning" – Resources required to run individual assessments are being provisioned.

          • "running" – Individual assessments are being run.

          • "starting" – The assessment run is starting, but resources are not yet being provisioned for individual assessments.

        • ReplicationTaskAssessmentRunCreationDate (datetime) --

          Date on which the assessment run was created using the StartReplicationTaskAssessmentRun operation.

        • AssessmentProgress (dict) --

          Indication of the completion progress for the individual assessments specified to run.

          • IndividualAssessmentCount (integer) --

            The number of individual assessments that are specified to run.

          • IndividualAssessmentCompletedCount (integer) --

            The number of individual assessments that have completed, successfully or not.

        • LastFailureMessage (string) --

          Last message generated by an individual assessment failure.

        • ServiceAccessRoleArn (string) --

          ARN of the service role used to start the assessment run using the StartReplicationTaskAssessmentRun operation.

        • ResultLocationBucket (string) --

          Amazon S3 bucket where AWS DMS stores the results of this assessment run.

        • ResultLocationFolder (string) --

          Folder in an Amazon S3 bucket where AWS DMS stores the results of this assessment run.

        • ResultEncryptionMode (string) --

          Encryption mode used to encrypt the assessment run results.

        • ResultKmsKeyArn (string) --

          ARN of the AWS KMS encryption key used to encrypt the assessment run results.

        • AssessmentRunName (string) --

          Unique name of the assessment run.
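The Marker/MaxRecords pagination contract described above can be wrapped in a generator that keeps requesting pages until no marker is returned. This is a sketch assuming `client` is a boto3 DMS client (`boto3.client('dms')`); the helper name is hypothetical.

```python
def list_assessment_runs(client, filters=None, max_records=20):
    """Yield every ReplicationTaskAssessmentRun record, following the
    Marker pagination token across requests."""
    kwargs = {'MaxRecords': max_records}
    if filters:
        kwargs['Filters'] = filters
    while True:
        resp = client.describe_replication_task_assessment_runs(**kwargs)
        for run in resp.get('ReplicationTaskAssessmentRuns', []):
            yield run
        marker = resp.get('Marker')
        if not marker:
            # No marker means there are no records beyond this page.
            break
        kwargs['Marker'] = marker
```

For example, passing `filters=[{'Name': 'status', 'Values': ['failed']}]` would iterate only over assessment runs with a failed status.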

CancelReplicationTaskAssessmentRun (new) Link ¶

Cancels a single premigration assessment run.

This operation prevents any individual assessments from running if they haven't started running. It also attempts to cancel any individual assessments that are currently running.

See also: AWS API Documentation

Request Syntax

client.cancel_replication_task_assessment_run(
    ReplicationTaskAssessmentRunArn='string'
)
type ReplicationTaskAssessmentRunArn:

string

param ReplicationTaskAssessmentRunArn:

[REQUIRED]

Amazon Resource Name (ARN) of the premigration assessment run to be canceled.

rtype:

dict

returns:

Response Syntax

{
    'ReplicationTaskAssessmentRun': {
        'ReplicationTaskAssessmentRunArn': 'string',
        'ReplicationTaskArn': 'string',
        'Status': 'string',
        'ReplicationTaskAssessmentRunCreationDate': datetime(2015, 1, 1),
        'AssessmentProgress': {
            'IndividualAssessmentCount': 123,
            'IndividualAssessmentCompletedCount': 123
        },
        'LastFailureMessage': 'string',
        'ServiceAccessRoleArn': 'string',
        'ResultLocationBucket': 'string',
        'ResultLocationFolder': 'string',
        'ResultEncryptionMode': 'string',
        'ResultKmsKeyArn': 'string',
        'AssessmentRunName': 'string'
    }
}

Response Structure

  • (dict) --

    • ReplicationTaskAssessmentRun (dict) --

      The ReplicationTaskAssessmentRun object for the canceled assessment run.

      • ReplicationTaskAssessmentRunArn (string) --

        Amazon Resource Name (ARN) of this assessment run.

      • ReplicationTaskArn (string) --

        ARN of the migration task associated with this premigration assessment run.

      • Status (string) --

        Assessment run status.

        This status can have one of the following values:

        • "cancelling" – The assessment run was canceled by the CancelReplicationTaskAssessmentRun operation.

        • "deleting" – The assessment run was deleted by the DeleteReplicationTaskAssessmentRun operation.

        • "failed" – At least one individual assessment completed with a failed status.

        • "error-provisioning" – An internal error occurred while resources were provisioned (during provisioning status).

        • "error-executing" – An internal error occurred while individual assessments ran (during running status).

        • "invalid state" – The assessment run is in an unknown state.

        • "passed" – All individual assessments have completed, and none has a failed status.

        • "provisioning" – Resources required to run individual assessments are being provisioned.

        • "running" – Individual assessments are being run.

        • "starting" – The assessment run is starting, but resources are not yet being provisioned for individual assessments.

      • ReplicationTaskAssessmentRunCreationDate (datetime) --

        Date on which the assessment run was created using the StartReplicationTaskAssessmentRun operation.

      • AssessmentProgress (dict) --

        Indication of the completion progress for the individual assessments specified to run.

        • IndividualAssessmentCount (integer) --

          The number of individual assessments that are specified to run.

        • IndividualAssessmentCompletedCount (integer) --

          The number of individual assessments that have completed, successfully or not.

      • LastFailureMessage (string) --

        Last message generated by an individual assessment failure.

      • ServiceAccessRoleArn (string) --

        ARN of the service role used to start the assessment run using the StartReplicationTaskAssessmentRun operation.

      • ResultLocationBucket (string) --

        Amazon S3 bucket where AWS DMS stores the results of this assessment run.

      • ResultLocationFolder (string) --

        Folder in an Amazon S3 bucket where AWS DMS stores the results of this assessment run.

      • ResultEncryptionMode (string) --

        Encryption mode used to encrypt the assessment run results.

      • ResultKmsKeyArn (string) --

        ARN of the AWS KMS encryption key used to encrypt the assessment run results.

      • AssessmentRunName (string) --

        Unique name of the assessment run.

DeleteReplicationTaskAssessmentRun (new) Link ¶

Deletes the record of a single premigration assessment run.

This operation removes all metadata that AWS DMS maintains about this assessment run. However, the operation leaves untouched all information about this assessment run that is stored in your Amazon S3 bucket.

See also: AWS API Documentation

Request Syntax

client.delete_replication_task_assessment_run(
    ReplicationTaskAssessmentRunArn='string'
)
type ReplicationTaskAssessmentRunArn:

string

param ReplicationTaskAssessmentRunArn:

[REQUIRED]

Amazon Resource Name (ARN) of the premigration assessment run to be deleted.

rtype:

dict

returns:

Response Syntax

{
    'ReplicationTaskAssessmentRun': {
        'ReplicationTaskAssessmentRunArn': 'string',
        'ReplicationTaskArn': 'string',
        'Status': 'string',
        'ReplicationTaskAssessmentRunCreationDate': datetime(2015, 1, 1),
        'AssessmentProgress': {
            'IndividualAssessmentCount': 123,
            'IndividualAssessmentCompletedCount': 123
        },
        'LastFailureMessage': 'string',
        'ServiceAccessRoleArn': 'string',
        'ResultLocationBucket': 'string',
        'ResultLocationFolder': 'string',
        'ResultEncryptionMode': 'string',
        'ResultKmsKeyArn': 'string',
        'AssessmentRunName': 'string'
    }
}

Response Structure

  • (dict) --

    • ReplicationTaskAssessmentRun (dict) --

      The ReplicationTaskAssessmentRun object for the deleted assessment run.

      • ReplicationTaskAssessmentRunArn (string) --

        Amazon Resource Name (ARN) of this assessment run.

      • ReplicationTaskArn (string) --

        ARN of the migration task associated with this premigration assessment run.

      • Status (string) --

        Assessment run status.

        This status can have one of the following values:

        • "cancelling" – The assessment run was canceled by the CancelReplicationTaskAssessmentRun operation.

        • "deleting" – The assessment run was deleted by the DeleteReplicationTaskAssessmentRun operation.

        • "failed" – At least one individual assessment completed with a failed status.

        • "error-provisioning" – An internal error occurred while resources were provisioned (during provisioning status).

        • "error-executing" – An internal error occurred while individual assessments ran (during running status).

        • "invalid state" – The assessment run is in an unknown state.

        • "passed" – All individual assessments have completed, and none has a failed status.

        • "provisioning" – Resources required to run individual assessments are being provisioned.

        • "running" – Individual assessments are being run.

        • "starting" – The assessment run is starting, but resources are not yet being provisioned for individual assessments.

      • ReplicationTaskAssessmentRunCreationDate (datetime) --

        Date on which the assessment run was created using the StartReplicationTaskAssessmentRun operation.

      • AssessmentProgress (dict) --

        Indication of the completion progress for the individual assessments specified to run.

        • IndividualAssessmentCount (integer) --

          The number of individual assessments that are specified to run.

        • IndividualAssessmentCompletedCount (integer) --

          The number of individual assessments that have completed, successfully or not.

      • LastFailureMessage (string) --

        Last message generated by an individual assessment failure.

      • ServiceAccessRoleArn (string) --

        ARN of the service role used to start the assessment run using the StartReplicationTaskAssessmentRun operation.

      • ResultLocationBucket (string) --

        Amazon S3 bucket where AWS DMS stores the results of this assessment run.

      • ResultLocationFolder (string) --

        Folder in an Amazon S3 bucket where AWS DMS stores the results of this assessment run.

      • ResultEncryptionMode (string) --

        Encryption mode used to encrypt the assessment run results.

      • ResultKmsKeyArn (string) --

        ARN of the AWS KMS encryption key used to encrypt the assessment run results.

      • AssessmentRunName (string) --

        Unique name of the assessment run.

DescribeReplicationTaskIndividualAssessments (new) Link ¶

Returns a paginated list of individual assessments based on filter settings.

These filter settings can specify a combination of premigration assessment runs, migration tasks, and assessment status values.

See also: AWS API Documentation

Request Syntax

client.describe_replication_task_individual_assessments(
    Filters=[
        {
            'Name': 'string',
            'Values': [
                'string',
            ]
        },
    ],
    MaxRecords=123,
    Marker='string'
)
type Filters:

list

param Filters:

Filters applied to the individual assessments described in the form of key-value pairs.

Valid filter names: replication-task-assessment-run-arn, replication-task-arn, status

  • (dict) --

    Identifies the name and value of a filter object. This filter is used to limit the number and type of AWS DMS objects that are returned for a particular Describe* or similar operation.

    • Name (string) -- [REQUIRED]

      The name of the filter as specified for a Describe* or similar operation.

    • Values (list) -- [REQUIRED]

      The filter value, which can specify one or more values used to narrow the returned results.

      • (string) --

type MaxRecords:

integer

param MaxRecords:

The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.

type Marker:

string

param Marker:

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

rtype:

dict

returns:

Response Syntax

{
    'Marker': 'string',
    'ReplicationTaskIndividualAssessments': [
        {
            'ReplicationTaskIndividualAssessmentArn': 'string',
            'ReplicationTaskAssessmentRunArn': 'string',
            'IndividualAssessmentName': 'string',
            'Status': 'string',
            'ReplicationTaskIndividualAssessmentStartDate': datetime(2015, 1, 1)
        },
    ]
}

Response Structure

  • (dict) --

    • Marker (string) --

      A pagination token returned for you to pass to a subsequent request. If you pass this token as the Marker value in a subsequent request, the response includes only records beyond the marker, up to the value specified in the request by MaxRecords.

    • ReplicationTaskIndividualAssessments (list) --

      One or more individual assessments as specified by Filters.

      • (dict) --

        Provides information that describes an individual assessment from a premigration assessment run.

        • ReplicationTaskIndividualAssessmentArn (string) --

          Amazon Resource Name (ARN) of this individual assessment.

        • ReplicationTaskAssessmentRunArn (string) --

          ARN of the premigration assessment run that is created to run this individual assessment.

        • IndividualAssessmentName (string) --

          Name of this individual assessment.

        • Status (string) --

          Individual assessment status.

          This status can have one of the following values:

          • "cancelled"

          • "error"

          • "failed"

          • "passed"

          • "pending"

          • "running"

        • ReplicationTaskIndividualAssessmentStartDate (datetime) --

          Date when this individual assessment was started as part of running the StartReplicationTaskAssessmentRun operation.
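The Filters list for this operation takes the three valid filter names given above as Name/Values pairs. A small builder, sketched below with a hypothetical helper name, can validate names before the request is sent.

```python
# The valid filter names listed for
# describe_replication_task_individual_assessments.
VALID_FILTER_NAMES = {'replication-task-assessment-run-arn',
                      'replication-task-arn',
                      'status'}

def build_individual_assessment_filters(**by_name):
    """Build the Filters list; keyword underscores map to the hyphenated
    filter names, and a bare string value becomes a one-element list."""
    filters = []
    for key, values in by_name.items():
        name = key.replace('_', '-')
        if name not in VALID_FILTER_NAMES:
            raise ValueError(f'Unknown filter name: {name}')
        if isinstance(values, str):
            values = [values]
        filters.append({'Name': name, 'Values': list(values)})
    return filters
```

For example, `build_individual_assessment_filters(status=['failed', 'error'])` yields a Filters list that restricts the response to individual assessments in those two states.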

CreateEndpoint (updated) Link ¶
Changes (request, response)
Request
{'IBMDb2Settings': {'DatabaseName': 'string',
                    'Password': 'string',
                    'Port': 'integer',
                    'ServerName': 'string',
                    'Username': 'string'},
 'KafkaSettings': {'IncludeControlDetails': 'boolean',
                   'IncludePartitionValue': 'boolean',
                   'IncludeTableAlterOperations': 'boolean',
                   'IncludeTransactionDetails': 'boolean',
                   'MessageFormat': 'json | json-unformatted',
                   'PartitionIncludeSchemaTable': 'boolean'},
 'MicrosoftSQLServerSettings': {'DatabaseName': 'string',
                                'Password': 'string',
                                'Port': 'integer',
                                'ServerName': 'string',
                                'Username': 'string'},
 'MySQLSettings': {'DatabaseName': 'string',
                   'Password': 'string',
                   'Port': 'integer',
                   'ServerName': 'string',
                   'Username': 'string'},
 'OracleSettings': {'AsmPassword': 'string',
                    'AsmServer': 'string',
                    'AsmUser': 'string',
                    'DatabaseName': 'string',
                    'Password': 'string',
                    'Port': 'integer',
                    'SecurityDbEncryption': 'string',
                    'SecurityDbEncryptionName': 'string',
                    'ServerName': 'string',
                    'Username': 'string'},
 'PostgreSQLSettings': {'DatabaseName': 'string',
                        'Password': 'string',
                        'Port': 'integer',
                        'ServerName': 'string',
                        'Username': 'string'},
 'SybaseSettings': {'DatabaseName': 'string',
                    'Password': 'string',
                    'Port': 'integer',
                    'ServerName': 'string',
                    'Username': 'string'}}
Response
{'Endpoint': {'IBMDb2Settings': {'DatabaseName': 'string',
                                 'Password': 'string',
                                 'Port': 'integer',
                                 'ServerName': 'string',
                                 'Username': 'string'},
              'KafkaSettings': {'IncludeControlDetails': 'boolean',
                                'IncludePartitionValue': 'boolean',
                                'IncludeTableAlterOperations': 'boolean',
                                'IncludeTransactionDetails': 'boolean',
                                'MessageFormat': 'json | json-unformatted',
                                'PartitionIncludeSchemaTable': 'boolean'},
              'MicrosoftSQLServerSettings': {'DatabaseName': 'string',
                                             'Password': 'string',
                                             'Port': 'integer',
                                             'ServerName': 'string',
                                             'Username': 'string'},
              'MySQLSettings': {'DatabaseName': 'string',
                                'Password': 'string',
                                'Port': 'integer',
                                'ServerName': 'string',
                                'Username': 'string'},
              'OracleSettings': {'AsmPassword': 'string',
                                 'AsmServer': 'string',
                                 'AsmUser': 'string',
                                 'DatabaseName': 'string',
                                 'Password': 'string',
                                 'Port': 'integer',
                                 'SecurityDbEncryption': 'string',
                                 'SecurityDbEncryptionName': 'string',
                                 'ServerName': 'string',
                                 'Username': 'string'},
              'PostgreSQLSettings': {'DatabaseName': 'string',
                                     'Password': 'string',
                                     'Port': 'integer',
                                     'ServerName': 'string',
                                     'Username': 'string'},
              'SybaseSettings': {'DatabaseName': 'string',
                                 'Password': 'string',
                                 'Port': 'integer',
                                 'ServerName': 'string',
                                 'Username': 'string'}}}

Creates an endpoint using the provided settings.

See also: AWS API Documentation

Request Syntax

client.create_endpoint(
    EndpointIdentifier='string',
    EndpointType='source'|'target',
    EngineName='string',
    Username='string',
    Password='string',
    ServerName='string',
    Port=123,
    DatabaseName='string',
    ExtraConnectionAttributes='string',
    KmsKeyId='string',
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ],
    CertificateArn='string',
    SslMode='none'|'require'|'verify-ca'|'verify-full',
    ServiceAccessRoleArn='string',
    ExternalTableDefinition='string',
    DynamoDbSettings={
        'ServiceAccessRoleArn': 'string'
    },
    S3Settings={
        'ServiceAccessRoleArn': 'string',
        'ExternalTableDefinition': 'string',
        'CsvRowDelimiter': 'string',
        'CsvDelimiter': 'string',
        'BucketFolder': 'string',
        'BucketName': 'string',
        'CompressionType': 'none'|'gzip',
        'EncryptionMode': 'sse-s3'|'sse-kms',
        'ServerSideEncryptionKmsKeyId': 'string',
        'DataFormat': 'csv'|'parquet',
        'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
        'DictPageSizeLimit': 123,
        'RowGroupLength': 123,
        'DataPageSize': 123,
        'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
        'EnableStatistics': True|False,
        'IncludeOpForFullLoad': True|False,
        'CdcInsertsOnly': True|False,
        'TimestampColumnName': 'string',
        'ParquetTimestampInMillisecond': True|False,
        'CdcInsertsAndUpdates': True|False
    },
    DmsTransferSettings={
        'ServiceAccessRoleArn': 'string',
        'BucketName': 'string'
    },
    MongoDbSettings={
        'Username': 'string',
        'Password': 'string',
        'ServerName': 'string',
        'Port': 123,
        'DatabaseName': 'string',
        'AuthType': 'no'|'password',
        'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
        'NestingLevel': 'none'|'one',
        'ExtractDocId': 'string',
        'DocsToInvestigate': 'string',
        'AuthSource': 'string',
        'KmsKeyId': 'string'
    },
    KinesisSettings={
        'StreamArn': 'string',
        'MessageFormat': 'json'|'json-unformatted',
        'ServiceAccessRoleArn': 'string',
        'IncludeTransactionDetails': True|False,
        'IncludePartitionValue': True|False,
        'PartitionIncludeSchemaTable': True|False,
        'IncludeTableAlterOperations': True|False,
        'IncludeControlDetails': True|False
    },
    KafkaSettings={
        'Broker': 'string',
        'Topic': 'string',
        'MessageFormat': 'json'|'json-unformatted',
        'IncludeTransactionDetails': True|False,
        'IncludePartitionValue': True|False,
        'PartitionIncludeSchemaTable': True|False,
        'IncludeTableAlterOperations': True|False,
        'IncludeControlDetails': True|False
    },
    ElasticsearchSettings={
        'ServiceAccessRoleArn': 'string',
        'EndpointUri': 'string',
        'FullLoadErrorPercentage': 123,
        'ErrorRetryDuration': 123
    },
    NeptuneSettings={
        'ServiceAccessRoleArn': 'string',
        'S3BucketName': 'string',
        'S3BucketFolder': 'string',
        'ErrorRetryDuration': 123,
        'MaxFileSize': 123,
        'MaxRetryCount': 123,
        'IamAuthEnabled': True|False
    },
    RedshiftSettings={
        'AcceptAnyDate': True|False,
        'AfterConnectScript': 'string',
        'BucketFolder': 'string',
        'BucketName': 'string',
        'ConnectionTimeout': 123,
        'DatabaseName': 'string',
        'DateFormat': 'string',
        'EmptyAsNull': True|False,
        'EncryptionMode': 'sse-s3'|'sse-kms',
        'FileTransferUploadStreams': 123,
        'LoadTimeout': 123,
        'MaxFileSize': 123,
        'Password': 'string',
        'Port': 123,
        'RemoveQuotes': True|False,
        'ReplaceInvalidChars': 'string',
        'ReplaceChars': 'string',
        'ServerName': 'string',
        'ServiceAccessRoleArn': 'string',
        'ServerSideEncryptionKmsKeyId': 'string',
        'TimeFormat': 'string',
        'TrimBlanks': True|False,
        'TruncateColumns': True|False,
        'Username': 'string',
        'WriteBufferSize': 123
    },
    PostgreSQLSettings={
        'DatabaseName': 'string',
        'Password': 'string',
        'Port': 123,
        'ServerName': 'string',
        'Username': 'string'
    },
    MySQLSettings={
        'DatabaseName': 'string',
        'Password': 'string',
        'Port': 123,
        'ServerName': 'string',
        'Username': 'string'
    },
    OracleSettings={
        'AsmPassword': 'string',
        'AsmServer': 'string',
        'AsmUser': 'string',
        'DatabaseName': 'string',
        'Password': 'string',
        'Port': 123,
        'SecurityDbEncryption': 'string',
        'SecurityDbEncryptionName': 'string',
        'ServerName': 'string',
        'Username': 'string'
    },
    SybaseSettings={
        'DatabaseName': 'string',
        'Password': 'string',
        'Port': 123,
        'ServerName': 'string',
        'Username': 'string'
    },
    MicrosoftSQLServerSettings={
        'Port': 123,
        'DatabaseName': 'string',
        'Password': 'string',
        'ServerName': 'string',
        'Username': 'string'
    },
    IBMDb2Settings={
        'DatabaseName': 'string',
        'Password': 'string',
        'Port': 123,
        'ServerName': 'string',
        'Username': 'string'
    }
)
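As a sketch, a minimal source-endpoint request for PostgreSQL could be assembled as follows. All identifiers, hostnames, and credentials below are placeholders, and the actual service call is shown commented out:

```python
# import boto3
# client = boto3.client('dms')

# Placeholder values -- substitute your own endpoint details.
params = {
    'EndpointIdentifier': 'pg-source-endpoint',   # letters, digits, hyphens only
    'EndpointType': 'source',
    'EngineName': 'postgres',
    'PostgreSQLSettings': {
        'ServerName': 'db.example.com',
        'Port': 5432,
        'DatabaseName': 'inventory',
        'Username': 'dms_user',
        'Password': 'example-password',
    },
}

# response = client.create_endpoint(**params)
# print(response['Endpoint']['EndpointArn'])
```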
type EndpointIdentifier:

string

param EndpointIdentifier:

[REQUIRED]

The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.

type EndpointType:

string

param EndpointType:

[REQUIRED]

The type of endpoint. Valid values are source and target.

type EngineName:

string

param EngineName:

[REQUIRED]

The type of engine for the endpoint. Valid values, depending on the EndpointType value, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "documentdb", "sqlserver", and "neptune".

type Username:

string

param Username:

The user name to be used to log in to the endpoint database.

type Password:

string

param Password:

The password to be used to log in to the endpoint database.

type ServerName:

string

param ServerName:

The name of the server where the endpoint database resides.

type Port:

integer

param Port:

The port used by the endpoint database.

type DatabaseName:

string

param DatabaseName:

The name of the endpoint database.

type ExtraConnectionAttributes:

string

param ExtraConnectionAttributes:

Additional attributes associated with the connection. Each attribute is specified as a name-value pair associated by an equal sign (=). Multiple attributes are separated by a semicolon (;) with no additional white space. For information on the attributes available for connecting your source or target endpoint, see Working with AWS DMS Endpoints in the AWS Database Migration Service User Guide.
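For illustration, such a string can be assembled from name-value pairs as shown below. The attribute names here are hypothetical; check the User Guide for the attributes your engine actually supports:

```python
# Join name=value pairs with semicolons and no additional white space,
# as the ExtraConnectionAttributes format requires.
attributes = {
    'exampleAttributeOne': 'true',   # hypothetical attribute names
    'exampleAttributeTwo': '30',
}
extra_connection_attributes = ';'.join(
    f'{name}={value}' for name, value in attributes.items()
)
print(extra_connection_attributes)
# exampleAttributeOne=true;exampleAttributeTwo=30
```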

type KmsKeyId:

string

param KmsKeyId:

An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.

If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.

AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.

type Tags:

list

param Tags:

One or more tags to be assigned to the endpoint.

  • (dict) --

    A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:

    • AddTagsToResource

    • ListTagsForResource

    • RemoveTagsFromResource

    • Key (string) --

      A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").

    • Value (string) --

      A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").

type CertificateArn:

string

param CertificateArn:

The Amazon Resource Name (ARN) for the certificate.

type SslMode:

string

param SslMode:

The Secure Sockets Layer (SSL) mode to use for the SSL connection. The default is none.

type ServiceAccessRoleArn:

string

param ServiceAccessRoleArn:

The Amazon Resource Name (ARN) for the service access role that you want to use to create the endpoint.

type ExternalTableDefinition:

string

param ExternalTableDefinition:

The external table definition.

type DynamoDbSettings:

dict

param DynamoDbSettings:

Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see Using Object Mapping to Migrate Data to DynamoDB in the AWS Database Migration Service User Guide.

  • ServiceAccessRoleArn (string) -- [REQUIRED]

    The Amazon Resource Name (ARN) used by the service access IAM role.

type S3Settings:

dict

param S3Settings:

Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see Extra Connection Attributes When Using Amazon S3 as a Target for AWS DMS in the AWS Database Migration Service User Guide.

  • ServiceAccessRoleArn (string) --

    The Amazon Resource Name (ARN) used by the service access IAM role.

  • ExternalTableDefinition (string) --

    The external table definition.

  • CsvRowDelimiter (string) --

    The delimiter used to separate rows in the source files. The default is a newline character (\n).

  • CsvDelimiter (string) --

    The delimiter used to separate columns in the source files. The default is a comma.

  • BucketFolder (string) --

    An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.

  • BucketName (string) --

    The name of the S3 bucket.

  • CompressionType (string) --

    An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.

  • EncryptionMode (string) --

    The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:

    • s3:CreateBucket

    • s3:ListBucket

    • s3:DeleteBucket

    • s3:GetBucketLocation

    • s3:GetObject

    • s3:PutObject

    • s3:DeleteObject

    • s3:GetObjectVersion

    • s3:GetBucketPolicy

    • s3:PutBucketPolicy

    • s3:DeleteBucketPolicy

  • ServerSideEncryptionKmsKeyId (string) --

    If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.

    Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value

  • DataFormat (string) --

    The format of the data that you want to use for output. You can choose one of the following:

    • csv : This is a row-based file format with comma-separated values (.csv).

    • parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.

  • EncodingType (string) --

    The type of encoding you are using:

    • RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.

    • PLAIN doesn't use encoding at all. Values are stored as they are.

    • PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.

  • DictPageSizeLimit (integer) --

    The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.

  • RowGroupLength (integer) --

    The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.

    If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).

  • DataPageSize (integer) --

    The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.

  • ParquetVersion (string) --

    The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.

  • EnableStatistics (boolean) --

    A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.

  • IncludeOpForFullLoad (boolean) --

    A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.

    For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.

  • CdcInsertsOnly (boolean) --

    A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.

    If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.

  • TimestampColumnName (string) --

    A value that, when nonblank, causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.

    DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.

    For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.

    For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.

    The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.

    When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.

  • ParquetTimestampInMillisecond (boolean) --

    A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.

    When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.

    Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.

  • CdcInsertsAndUpdates (boolean) --

    A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.

    For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
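As a rough sketch of this interaction for the CdcInsertsAndUpdates case, the first .csv field can be modeled as follows (this mirrors the documented behavior; it is not DMS code):

```python
def first_csv_field(operation, include_op_for_full_load):
    """First .csv field of a CDC record when CdcInsertsAndUpdates is
    true: DELETEs are not migrated at all, and INSERTs and UPDATEs are
    annotated only if IncludeOpForFullLoad is also true."""
    if operation == 'DELETE':
        return None                  # record not written in this mode
    if include_op_for_full_load:
        return operation[0]          # 'I' or 'U'
    return ''                        # written with no annotation

print(first_csv_field('UPDATE', True))   # U
print(first_csv_field('INSERT', False))  # (empty string)
```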

type DmsTransferSettings:

dict

param DmsTransferSettings:

The settings in JSON format for the DMS transfer type of source endpoint.

Possible settings include the following:

  • ServiceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.

  • BucketName - The name of the S3 bucket to use.

  • CompressionType - An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed.

Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string

JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }

  • ServiceAccessRoleArn (string) --

    The IAM role that has permission to access the Amazon S3 bucket.

  • BucketName (string) --

    The name of the S3 bucket to use.

type MongoDbSettings:

dict

param MongoDbSettings:

Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see Using MongoDB as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.

  • Username (string) --

    The user name you use to access the MongoDB source endpoint.

  • Password (string) --

    The password for the user account you use to access the MongoDB source endpoint.

  • ServerName (string) --

    The name of the server on the MongoDB source endpoint.

  • Port (integer) --

    The port value for the MongoDB source endpoint.

  • DatabaseName (string) --

    The database name on the MongoDB source endpoint.

  • AuthType (string) --

    The authentication type you use to access the MongoDB source endpoint.

    When set to "no", the user name and password parameters are not used and can be empty.

  • AuthMechanism (string) --

    The authentication mechanism you use to access the MongoDB source endpoint.

    For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".

  • NestingLevel (string) --

    Specifies either document or table mode.

    Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.

  • ExtractDocId (string) --

    Specifies the document ID. Use this setting when NestingLevel is set to "none".

    Default value is "false".

  • DocsToInvestigate (string) --

    Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one".

    Must be a value greater than 0. The default value is 1000.

  • AuthSource (string) --

    The MongoDB database name. This setting isn't used when AuthType is set to "no".

    The default is "admin".

  • KmsKeyId (string) --

    The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
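Putting these settings together, a source MongoDB endpoint in table mode might look like the following sketch (hostname and credentials are placeholders):

```python
# Illustrative MongoDbSettings for table mode (NestingLevel 'one').
mongodb_settings = {
    'ServerName': 'mongo.example.com',    # placeholder host
    'Port': 27017,
    'DatabaseName': 'appdata',
    'AuthType': 'password',
    'AuthMechanism': 'scram_sha_1',       # default for MongoDB 3.x or later
    'AuthSource': 'admin',
    'Username': 'dms_user',
    'Password': 'example-password',
    'NestingLevel': 'one',                # table mode
    'DocsToInvestigate': '1000',          # documents sampled for structure
}
# client.create_endpoint(..., EngineName='mongodb',
#                        MongoDbSettings=mongodb_settings)
```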

type KinesisSettings:

dict

param KinesisSettings:

Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see Using Amazon Kinesis Data Streams as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.

  • StreamArn (string) --

    The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.

  • MessageFormat (string) --

    The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

  • ServiceAccessRoleArn (string) --

    The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.

  • IncludeTransactionDetails (boolean) --

    Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

  • IncludePartitionValue (boolean) --

    Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type. The default is False.

  • PartitionIncludeSchemaTable (boolean) --

    Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False.

  • IncludeTableAlterOperations (boolean) --

    Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

  • IncludeControlDetails (boolean) --

    Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False.

type KafkaSettings:

dict

param KafkaSettings:

Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see Using Apache Kafka as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.

  • Broker (string) --

    The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

  • Topic (string) --

    The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

  • MessageFormat (string) --

    The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

  • IncludeTransactionDetails (boolean) --

    Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

  • IncludePartitionValue (boolean) --

    Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is False.

  • PartitionIncludeSchemaTable (boolean) --

    Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is False.

  • IncludeTableAlterOperations (boolean) --

    Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

  • IncludeControlDetails (boolean) --

    Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is False.
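For example, a target Kafka endpoint that spreads partition load and emits one JSON record per line might use settings along these lines (broker address and topic are placeholders):

```python
kafka_settings = {
    'Broker': 'broker.example.com:9092',   # placeholder broker-hostname:port
    'Topic': 'dms-replication',            # placeholder topic name
    'MessageFormat': 'json-unformatted',   # one record per line, no tabs
    'IncludeTransactionDetails': True,
    'IncludePartitionValue': True,
    'PartitionIncludeSchemaTable': True,   # schema.table prefix on partition values
}
# client.create_endpoint(..., EngineName='kafka',
#                        KafkaSettings=kafka_settings)
```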

type ElasticsearchSettings:

dict

param ElasticsearchSettings:

Settings in JSON format for the target Elasticsearch endpoint. For more information about the available settings, see Extra Connection Attributes When Using Elasticsearch as a Target for AWS DMS in the AWS Database Migration Service User Guide.

  • ServiceAccessRoleArn (string) -- [REQUIRED]

    The Amazon Resource Name (ARN) used by the service to access the IAM role.

  • EndpointUri (string) -- [REQUIRED]

    The endpoint for the Elasticsearch cluster.

  • FullLoadErrorPercentage (integer) --

    The maximum percentage of records that can fail to be written before a full load operation stops.

  • ErrorRetryDuration (integer) --

    The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.

type NeptuneSettings:

dict

param NeptuneSettings:

Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see Specifying Endpoint Settings for Amazon Neptune as a Target in the AWS Database Migration Service User Guide.

  • ServiceAccessRoleArn (string) --

    The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.

  • S3BucketName (string) -- [REQUIRED]

    The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.

  • S3BucketFolder (string) -- [REQUIRED]

    A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.

  • ErrorRetryDuration (integer) --

    The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.

  • MaxFileSize (integer) --

    The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.

  • MaxRetryCount (integer) --

    The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.

  • IamAuthEnabled (boolean) --

    If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true. Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn. The default is false.

type RedshiftSettings:

dict

param RedshiftSettings:

Provides information that defines an Amazon Redshift endpoint.

  • AcceptAnyDate (boolean) --

    A value that indicates whether to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).

    This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.

  • AfterConnectScript (string) --

    Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.

  • BucketFolder (string) --

    The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.

  • BucketName (string) --

    The name of the S3 bucket you want to use.

  • ConnectionTimeout (integer) --

    A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.

  • DatabaseName (string) --

    The name of the Amazon Redshift data warehouse (service) that you are working with.

  • DateFormat (string) --

    The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.

    If your date and time values use formats different from each other, set this to auto.

  • EmptyAsNull (boolean) --

    A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.

  • EncryptionMode (string) --

    The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"

  • FileTransferUploadStreams (integer) --

    The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.

  • LoadTimeout (integer) --

    The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.

  • MaxFileSize (integer) --

    The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).

  • Password (string) --

    The password for the user named in the username property.

  • Port (integer) --

    The port number for Amazon Redshift. The default value is 5439.

  • RemoveQuotes (boolean) --

    A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.

  • ReplaceInvalidChars (string) --

    A list of characters that you want to replace. Use with ReplaceChars.

  • ReplaceChars (string) --

    A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars. The default is "?".

  • ServerName (string) --

    The name of the Amazon Redshift cluster you are using.

  • ServiceAccessRoleArn (string) --

    The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.

  • ServerSideEncryptionKmsKeyId (string) --

    The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.

  • TimeFormat (string) --

    The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that aren't supported when you use a time format string.

    If your date and time values use formats different from each other, set this parameter to auto.

  • TrimBlanks (boolean) --

    A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.

  • TruncateColumns (boolean) --

    A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.

  • Username (string) --

    An Amazon Redshift user name for a registered user.

  • WriteBufferSize (integer) --

    The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.

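A minimal sketch of how several of the Redshift settings above might be combined for a target endpoint. All names (cluster host, role ARN, staging bucket, credentials) are hypothetical placeholders, and the boto3 call is shown only in comments.

```python
# Sketch: RedshiftSettings for a DMS target endpoint. Every identifier
# below is a hypothetical placeholder, not a real resource.
redshift_settings = {
    'ServerName': 'example-cluster.abc123.us-east-1.redshift.amazonaws.com',
    'Port': 5439,                    # default Redshift port
    'DatabaseName': 'dev',
    'Username': 'dms_user',
    'Password': 'example-password',
    'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-redshift-role',
    'BucketName': 'example-dms-staging',  # .csv files staged here before the load
    'FileTransferUploadStreams': 10,      # threads per file, 1-64 (default 10)
    'MaxFileSize': 32768,                 # KB per .csv file (default 32,768 = 32 MB)
    'EmptyAsNull': True,                  # migrate empty CHAR/VARCHAR as NULL
    'TruncateColumns': True,              # truncate oversized VARCHAR/CHAR data to fit
}

# The actual call (requires boto3 and AWS credentials; not executed here):
# import boto3
# boto3.client('dms').create_endpoint(
#     EndpointIdentifier='redshift-target', EndpointType='target',
#     EngineName='redshift', RedshiftSettings=redshift_settings)
```
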
type PostgreSQLSettings:

dict

param PostgreSQLSettings:

Settings in JSON format for the source and target PostgreSQL endpoint. For information about other available settings, see Extra connection attributes when using PostgreSQL as a source for AWS DMS and Extra connection attributes when using PostgreSQL as a target for AWS DMS in the AWS Database Migration Service User Guide.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • Port (integer) --

    Endpoint TCP port.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.

type MySQLSettings:

dict

param MySQLSettings:

Settings in JSON format for the source and target MySQL endpoint. For information about other available settings, see Extra connection attributes when using MySQL as a source for AWS DMS and Extra connection attributes when using a MySQL-compatible database as a target for AWS DMS in the AWS Database Migration Service User Guide.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • Port (integer) --

    Endpoint TCP port.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.

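The PostgreSQLSettings and MySQLSettings structures share the same five fields, so a single helper can build either. This is an illustrative sketch; the helper name and all connection values are hypothetical.

```python
def basic_engine_settings(server, port, database, user, password):
    """Build the five-field dict shared by the PostgreSQLSettings and
    MySQLSettings structures. Hypothetical helper; values are caller-supplied."""
    return {
        'ServerName': server,    # fully qualified domain name of the endpoint
        'Port': port,            # endpoint TCP port
        'DatabaseName': database,
        'Username': user,
        'Password': password,
    }

# Hypothetical PostgreSQL source endpoint values:
pg_settings = basic_engine_settings(
    'pg.example.internal', 5432, 'inventory', 'dms_user', 'example-password')
```
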
type OracleSettings:

dict

param OracleSettings:

Settings in JSON format for the source and target Oracle endpoint. For information about other available settings, see Extra connection attributes when using Oracle as a source for AWS DMS and Extra connection attributes when using Oracle as a target for AWS DMS in the AWS Database Migration Service User Guide.

  • AsmPassword (string) --

    For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

  • AsmServer (string) --

    For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

  • AsmUser (string) --

    For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • Port (integer) --

    Endpoint TCP port.

  • SecurityDbEncryption (string) --

    For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. The SecurityDbEncryption setting is related to this SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

  • SecurityDbEncryptionName (string) --

    For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName, see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.

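Per the AsmPassword and SecurityDbEncryption descriptions above, the ASM and TDE passwords ride along inside the endpoint's Password request parameter as a comma-separated value when you use Binary Reader. A sketch of composing that value; every credential and host below is a hypothetical placeholder.

```python
# Sketch: composing the comma-separated Password value for an Oracle source
# endpoint that uses Binary Reader with ASM and TDE. All values here are
# hypothetical placeholders.
oracle_password = 'example-db-pwd'
asm_password = 'example-asm-pwd'    # surfaced to DMS as AsmPassword
tde_password = 'example-tde-pwd'    # surfaced to DMS as SecurityDbEncryption

oracle_settings = {
    'ServerName': 'oracle.example.internal',
    'Port': 1521,
    'DatabaseName': 'ORCL',
    'Username': 'dms_user',
    'AsmServer': 'asm.example.internal:1521/+ASM',
    'AsmUser': 'asm_user',
    'SecurityDbEncryptionName': 'example-tde-key-name',
}

# The Password request parameter carries all three, comma-separated:
combined_password = ','.join([oracle_password, asm_password, tde_password])
```
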
type SybaseSettings:

dict

param SybaseSettings:

Settings in JSON format for the source and target SAP ASE endpoint. For information about other available settings, see Extra connection attributes when using SAP ASE as a source for AWS DMS and Extra connection attributes when using SAP ASE as a target for AWS DMS in the AWS Database Migration Service User Guide.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • Port (integer) --

    Endpoint TCP port.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.

type MicrosoftSQLServerSettings:

dict

param MicrosoftSQLServerSettings:

Settings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see Extra connection attributes when using SQL Server as a source for AWS DMS and Extra connection attributes when using SQL Server as a target for AWS DMS in the AWS Database Migration Service User Guide.

  • Port (integer) --

    Endpoint TCP port.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.

type IBMDb2Settings:

dict

param IBMDb2Settings:

Settings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see Extra connection attributes when using Db2 LUW as a source for AWS DMS in the AWS Database Migration Service User Guide.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • Port (integer) --

    Endpoint TCP port.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.

rtype:

dict

returns:

Response Syntax

{
    'Endpoint': {
        'EndpointIdentifier': 'string',
        'EndpointType': 'source'|'target',
        'EngineName': 'string',
        'EngineDisplayName': 'string',
        'Username': 'string',
        'ServerName': 'string',
        'Port': 123,
        'DatabaseName': 'string',
        'ExtraConnectionAttributes': 'string',
        'Status': 'string',
        'KmsKeyId': 'string',
        'EndpointArn': 'string',
        'CertificateArn': 'string',
        'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
        'ServiceAccessRoleArn': 'string',
        'ExternalTableDefinition': 'string',
        'ExternalId': 'string',
        'DynamoDbSettings': {
            'ServiceAccessRoleArn': 'string'
        },
        'S3Settings': {
            'ServiceAccessRoleArn': 'string',
            'ExternalTableDefinition': 'string',
            'CsvRowDelimiter': 'string',
            'CsvDelimiter': 'string',
            'BucketFolder': 'string',
            'BucketName': 'string',
            'CompressionType': 'none'|'gzip',
            'EncryptionMode': 'sse-s3'|'sse-kms',
            'ServerSideEncryptionKmsKeyId': 'string',
            'DataFormat': 'csv'|'parquet',
            'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
            'DictPageSizeLimit': 123,
            'RowGroupLength': 123,
            'DataPageSize': 123,
            'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
            'EnableStatistics': True|False,
            'IncludeOpForFullLoad': True|False,
            'CdcInsertsOnly': True|False,
            'TimestampColumnName': 'string',
            'ParquetTimestampInMillisecond': True|False,
            'CdcInsertsAndUpdates': True|False
        },
        'DmsTransferSettings': {
            'ServiceAccessRoleArn': 'string',
            'BucketName': 'string'
        },
        'MongoDbSettings': {
            'Username': 'string',
            'Password': 'string',
            'ServerName': 'string',
            'Port': 123,
            'DatabaseName': 'string',
            'AuthType': 'no'|'password',
            'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
            'NestingLevel': 'none'|'one',
            'ExtractDocId': 'string',
            'DocsToInvestigate': 'string',
            'AuthSource': 'string',
            'KmsKeyId': 'string'
        },
        'KinesisSettings': {
            'StreamArn': 'string',
            'MessageFormat': 'json'|'json-unformatted',
            'ServiceAccessRoleArn': 'string',
            'IncludeTransactionDetails': True|False,
            'IncludePartitionValue': True|False,
            'PartitionIncludeSchemaTable': True|False,
            'IncludeTableAlterOperations': True|False,
            'IncludeControlDetails': True|False
        },
        'KafkaSettings': {
            'Broker': 'string',
            'Topic': 'string',
            'MessageFormat': 'json'|'json-unformatted',
            'IncludeTransactionDetails': True|False,
            'IncludePartitionValue': True|False,
            'PartitionIncludeSchemaTable': True|False,
            'IncludeTableAlterOperations': True|False,
            'IncludeControlDetails': True|False
        },
        'ElasticsearchSettings': {
            'ServiceAccessRoleArn': 'string',
            'EndpointUri': 'string',
            'FullLoadErrorPercentage': 123,
            'ErrorRetryDuration': 123
        },
        'NeptuneSettings': {
            'ServiceAccessRoleArn': 'string',
            'S3BucketName': 'string',
            'S3BucketFolder': 'string',
            'ErrorRetryDuration': 123,
            'MaxFileSize': 123,
            'MaxRetryCount': 123,
            'IamAuthEnabled': True|False
        },
        'RedshiftSettings': {
            'AcceptAnyDate': True|False,
            'AfterConnectScript': 'string',
            'BucketFolder': 'string',
            'BucketName': 'string',
            'ConnectionTimeout': 123,
            'DatabaseName': 'string',
            'DateFormat': 'string',
            'EmptyAsNull': True|False,
            'EncryptionMode': 'sse-s3'|'sse-kms',
            'FileTransferUploadStreams': 123,
            'LoadTimeout': 123,
            'MaxFileSize': 123,
            'Password': 'string',
            'Port': 123,
            'RemoveQuotes': True|False,
            'ReplaceInvalidChars': 'string',
            'ReplaceChars': 'string',
            'ServerName': 'string',
            'ServiceAccessRoleArn': 'string',
            'ServerSideEncryptionKmsKeyId': 'string',
            'TimeFormat': 'string',
            'TrimBlanks': True|False,
            'TruncateColumns': True|False,
            'Username': 'string',
            'WriteBufferSize': 123
        },
        'PostgreSQLSettings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        },
        'MySQLSettings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        },
        'OracleSettings': {
            'AsmPassword': 'string',
            'AsmServer': 'string',
            'AsmUser': 'string',
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'SecurityDbEncryption': 'string',
            'SecurityDbEncryptionName': 'string',
            'ServerName': 'string',
            'Username': 'string'
        },
        'SybaseSettings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        },
        'MicrosoftSQLServerSettings': {
            'Port': 123,
            'DatabaseName': 'string',
            'Password': 'string',
            'ServerName': 'string',
            'Username': 'string'
        },
        'IBMDb2Settings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        }
    }
}

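A sketch of pulling common fields out of the response shape above. The `response` dict here is a hand-built stand-in trimmed to a few fields, not a live API result, and the ARN is a placeholder.

```python
# Stand-in for a create_endpoint response, trimmed to a few fields from the
# Response Syntax above (not a live API result; the ARN is a placeholder).
response = {
    'Endpoint': {
        'EndpointIdentifier': 'redshift-target',
        'EndpointType': 'target',
        'EngineName': 'redshift',
        'Status': 'active',
        'EndpointArn': 'arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE',
        'SslMode': 'none',
    }
}

endpoint = response['Endpoint']
arn = endpoint['EndpointArn']               # unique handle for later API calls
is_target = endpoint['EndpointType'] == 'target'
```
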
Response Structure

  • (dict) --

    • Endpoint (dict) --

      The endpoint that was created.

      • EndpointIdentifier (string) --

        The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.

      • EndpointType (string) --

        The type of endpoint. Valid values are source and target.

      • EngineName (string) --

        The database engine name. Valid values, depending on the EndpointType, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "documentdb", "sqlserver", and "neptune".

      • EngineDisplayName (string) --

        The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."

      • Username (string) --

        The user name used to connect to the endpoint.

      • ServerName (string) --

        The name of the server at the endpoint.

      • Port (integer) --

        The port value used to access the endpoint.

      • DatabaseName (string) --

        The name of the database at the endpoint.

      • ExtraConnectionAttributes (string) --

        Additional connection attributes used to connect to the endpoint.

      • Status (string) --

        The status of the endpoint.

      • KmsKeyId (string) --

        An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.

        If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.

        AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.

      • EndpointArn (string) --

        The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.

      • CertificateArn (string) --

        The Amazon Resource Name (ARN) used for SSL connection to the endpoint.

      • SslMode (string) --

        The SSL mode used to connect to the endpoint. The default value is none.

      • ServiceAccessRoleArn (string) --

        The Amazon Resource Name (ARN) used by the service access IAM role.

      • ExternalTableDefinition (string) --

        The external table definition.

      • ExternalId (string) --

        Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.

      • DynamoDbSettings (dict) --

        The settings for the DynamoDB target endpoint. For more information, see the DynamoDBSettings structure.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) used by the service access IAM role.

      • S3Settings (dict) --

        The settings for the S3 target endpoint. For more information, see the S3Settings structure.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) used by the service access IAM role.

        • ExternalTableDefinition (string) --

          The external table definition.

        • CsvRowDelimiter (string) --

          The delimiter used to separate rows in the source files. The default is a newline ( \n).

        • CsvDelimiter (string) --

          The delimiter used to separate columns in the source files. The default is a comma.

        • BucketFolder (string) --

          An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.

        • BucketName (string) --

          The name of the S3 bucket.

        • CompressionType (string) --

          An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.

        • EncryptionMode (string) --

          The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:

          • s3:CreateBucket

          • s3:ListBucket

          • s3:DeleteBucket

          • s3:GetBucketLocation

          • s3:GetObject

          • s3:PutObject

          • s3:DeleteObject

          • s3:GetObjectVersion

          • s3:GetBucketPolicy

          • s3:PutBucketPolicy

          • s3:DeleteBucketPolicy

        • ServerSideEncryptionKmsKeyId (string) --

          If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.

          Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value

        • DataFormat (string) --

          The format of the data that you want to use for output. You can choose one of the following:

          • csv : This is a row-based file format with comma-separated values (.csv).

          • parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.

        • EncodingType (string) --

          The type of encoding you are using:

          • RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.

          • PLAIN doesn't use encoding at all. Values are stored as they are.

          • PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.

        • DictPageSizeLimit (integer) --

          The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.

        • RowGroupLength (integer) --

          The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.

          If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).

        • DataPageSize (integer) --

          The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.

        • ParquetVersion (string) --

          The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.

        • EnableStatistics (boolean) --

          A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.

        • IncludeOpForFullLoad (boolean) --

          A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.

          For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.

        • CdcInsertsOnly (boolean) --

          A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.

          If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.

        • TimestampColumnName (string) --

          A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.

          DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.

          For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.

          For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.

          The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.

          When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.

        • ParquetTimestampInMillisecond (boolean) --

          A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.

          When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.

          Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.

        • CdcInsertsAndUpdates (boolean) --

          A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.

          For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.

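The IncludeOpForFullLoad and CDC flags described above might be combined like this for an S3 target, so that the first field of every record carries an operation annotation. A sketch; the bucket, role, and column names are hypothetical placeholders.

```python
# Sketch: S3Settings combining the CDC flags described above so that both
# full-load and CDC records carry a first-field operation annotation.
# Bucket, role, and column names are hypothetical placeholders.
s3_settings = {
    'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-s3-role',
    'BucketName': 'example-dms-target',
    'DataFormat': 'csv',
    'IncludeOpForFullLoad': True,    # full-load rows annotated with 'I'
    'CdcInsertsAndUpdates': True,    # CDC writes only INSERTs ('I') and UPDATEs ('U')
    'TimestampColumnName': 'dms_commit_ts',  # adds the transfer/commit timestamp column
}

# CdcInsertsOnly and CdcInsertsAndUpdates are alternative behaviors;
# enabling both on one endpoint would be contradictory.
assert not (s3_settings.get('CdcInsertsOnly') and s3_settings['CdcInsertsAndUpdates'])
```
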
      • DmsTransferSettings (dict) --

        The settings in JSON format for the DMS transfer type of source endpoint.

        Possible settings include the following:

        • ServiceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.

        • BucketName - The name of the S3 bucket to use.

        • CompressionType - An optional parameter to use GZIP to compress the target files. Set this value to GZIP to compress the target files. Either set this value to NONE (the default) or don't use it to leave the files uncompressed.

        Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string

        JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }

        • ServiceAccessRoleArn (string) --

          The IAM role that has permission to access the Amazon S3 bucket.

        • BucketName (string) --

          The name of the S3 bucket to use.

      • MongoDbSettings (dict) --

        The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.

        • Username (string) --

          The user name you use to access the MongoDB source endpoint.

        • Password (string) --

          The password for the user account you use to access the MongoDB source endpoint.

        • ServerName (string) --

          The name of the server on the MongoDB source endpoint.

        • Port (integer) --

          The port value for the MongoDB source endpoint.

        • DatabaseName (string) --

          The database name on the MongoDB source endpoint.

        • AuthType (string) --

          The authentication type you use to access the MongoDB source endpoint.

          When set to "no", user name and password parameters are not used and can be empty.

        • AuthMechanism (string) --

          The authentication mechanism you use to access the MongoDB source endpoint.

          For the default value, in MongoDB version 2.x, "default" is "mongodb_cr". For MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".

        • NestingLevel (string) --

          Specifies either document or table mode.

          Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.

        • ExtractDocId (string) --

          Specifies the document ID. Use this setting when NestingLevel is set to "none".

          Default value is "false".

        • DocsToInvestigate (string) --

          Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one".

          Must be a positive value greater than 0. Default value is 1000.

        • AuthSource (string) --

          The MongoDB database name. This setting isn't used when AuthType is set to "no".

          The default is "admin".

        • KmsKeyId (string) --

          The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
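As a sketch of how the MongoDbSettings structure above fits into a create_endpoint call, the settings can be assembled as a plain dict first. All identifiers, hosts, and credentials below are placeholders, not values from this document:

```python
# Sketch only: assembling MongoDbSettings for a MongoDB source endpoint.
# Every name, host, and credential here is a placeholder.
mongo_endpoint_args = {
    "EndpointIdentifier": "mongo-source",   # placeholder identifier
    "EndpointType": "source",
    "EngineName": "mongodb",
    "MongoDbSettings": {
        "ServerName": "mongo.example.com",  # placeholder host
        "Port": 27017,
        "DatabaseName": "appdb",            # placeholder database
        "AuthType": "password",             # "no" skips Username/Password
        "AuthMechanism": "scram_sha_1",     # default for MongoDB 3.x or later
        "AuthSource": "admin",              # the default auth database
        "Username": "dms_user",
        "Password": "REPLACE_ME",
        "NestingLevel": "none",             # document mode
        "ExtractDocId": "true",             # note: a string, not a boolean
    },
}
# response = boto3.client("dms").create_endpoint(**mongo_endpoint_args)
```

The keyword-unpacking call at the end is commented out because it requires boto3 credentials and a real MongoDB host.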

      • KinesisSettings (dict) --

        The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.

        • StreamArn (string) --

          The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.

        • MessageFormat (string) --

          The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.

        • IncludeTransactionDetails (boolean) --

          Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

        • IncludePartitionValue (boolean) --

          Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type. The default is False.

        • PartitionIncludeSchemaTable (boolean) --

          Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False.

        • IncludeTableAlterOperations (boolean) --

          Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

        • IncludeControlDetails (boolean) --

          Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False.
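A hedged sketch of the KinesisSettings structure above, using placeholder ARNs. The boolean fields mirror the descriptions in this section; PartitionIncludeSchemaTable is the one to enable when many tables share a narrow primary-key range:

```python
# Sketch only: KinesisSettings for a target endpoint. ARNs are placeholders.
kinesis_settings = {
    "StreamArn": "arn:aws:kinesis:us-east-1:123456789012:stream/dms-stream",
    "MessageFormat": "json",                 # or "json-unformatted"
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-kinesis-role",
    "IncludeTransactionDetails": True,       # commit timestamp, log position
    "IncludePartitionValue": True,
    "PartitionIncludeSchemaTable": True,     # spreads hot keys across shards
    "IncludeTableAlterOperations": False,
    "IncludeControlDetails": False,
}
# boto3.client("dms").create_endpoint(
#     EndpointIdentifier="kinesis-target", EndpointType="target",
#     EngineName="kinesis", KinesisSettings=kinesis_settings)
```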

      • KafkaSettings (dict) --

        The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.

        • Broker (string) --

The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

        • Topic (string) --

          The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

        • MessageFormat (string) --

          The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

        • IncludeTransactionDetails (boolean) --

          Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

        • IncludePartitionValue (boolean) --

          Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is False.

        • PartitionIncludeSchemaTable (boolean) --

          Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is False.

        • IncludeTableAlterOperations (boolean) --

          Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

        • IncludeControlDetails (boolean) --

          Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is False.
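A minimal sketch of the KafkaSettings structure above. The broker value is the example given in this section; the topic name is a placeholder:

```python
# Sketch only: KafkaSettings for a target endpoint.
kafka_settings = {
    # broker-hostname-or-ip:port, per the Broker description above
    "Broker": "ec2-12-345-678-901.compute-1.amazonaws.com:2345",
    "Topic": "dms-migration",        # omit to get "kafka-default-topic"
    "MessageFormat": "json-unformatted",  # one record per line, no tabs
    "IncludePartitionValue": True,
    "PartitionIncludeSchemaTable": True,  # spreads hot keys across partitions
}
```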

      • ElasticsearchSettings (dict) --

The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.

        • ServiceAccessRoleArn (string) --

The Amazon Resource Name (ARN) used by the service to access the IAM role.

        • EndpointUri (string) --

          The endpoint for the Elasticsearch cluster.

        • FullLoadErrorPercentage (integer) --

          The maximum percentage of records that can fail to be written before a full load operation stops.

        • ErrorRetryDuration (integer) --

          The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
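The four ElasticsearchSettings fields above can be sketched as follows; the role ARN, endpoint URI, and thresholds are placeholders chosen for illustration:

```python
# Sketch only: ElasticsearchSettings for a target endpoint. Values are
# placeholders, not defaults from the API.
es_settings = {
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-es-role",
    "EndpointUri": "https://search-dms-demo.us-east-1.es.amazonaws.com",
    "FullLoadErrorPercentage": 10,  # stop full load past 10% failed records
    "ErrorRetryDuration": 300,      # retry failed requests for up to 300 s
}
```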

      • NeptuneSettings (dict) --

        The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.

        • S3BucketName (string) --

          The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.

        • S3BucketFolder (string) --

A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.

        • ErrorRetryDuration (integer) --

          The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.

        • MaxFileSize (integer) --

          The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.

        • MaxRetryCount (integer) --

          The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.

        • IamAuthEnabled (boolean) --

          If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true. Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn. The default is false.
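Pulling the NeptuneSettings fields above together as a sketch, with the documented defaults spelled out in comments (role ARN and bucket name are placeholders):

```python
# Sketch only: NeptuneSettings for a target endpoint.
neptune_settings = {
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-neptune-role",
    "S3BucketName": "dms-neptune-staging",  # placeholder staging bucket
    "S3BucketFolder": "graph-data/",
    "MaxFileSize": 1048576,    # KB per .csv batch (the documented default)
    "ErrorRetryDuration": 250, # ms between bulk-load retries (the default)
    "MaxRetryCount": 5,        # bulk-load retries before error (the default)
    "IamAuthEnabled": True,    # requires a matching policy on the role above
}
```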

      • RedshiftSettings (dict) --

        Settings for the Amazon Redshift endpoint.

        • AcceptAnyDate (boolean) --

          A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).

          This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.

        • AfterConnectScript (string) --

          Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.

        • BucketFolder (string) --

          The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.

        • BucketName (string) --

The name of the S3 bucket that you want to use.

        • ConnectionTimeout (integer) --

          A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.

        • DatabaseName (string) --

          The name of the Amazon Redshift data warehouse (service) that you are working with.

        • DateFormat (string) --

          The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.

          If your date and time values use formats different from each other, set this to auto.

        • EmptyAsNull (boolean) --

          A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.

        • EncryptionMode (string) --

The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".

        • FileTransferUploadStreams (integer) --

          The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.

        • LoadTimeout (integer) --

          The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.

        • MaxFileSize (integer) --

          The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).

        • Password (string) --

          The password for the user named in the username property.

        • Port (integer) --

          The port number for Amazon Redshift. The default value is 5439.

        • RemoveQuotes (boolean) --

          A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.

        • ReplaceInvalidChars (string) --

          A list of characters that you want to replace. Use with ReplaceChars.

        • ReplaceChars (string) --

A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars. The default is "?".

        • ServerName (string) --

          The name of the Amazon Redshift cluster you are using.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.

        • ServerSideEncryptionKmsKeyId (string) --

          The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.

        • TimeFormat (string) --

The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that aren't supported when you use a time format string.

          If your date and time values use formats different from each other, set this parameter to auto.

        • TrimBlanks (boolean) --

          A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.

        • TruncateColumns (boolean) --

          A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.

        • Username (string) --

          An Amazon Redshift user name for a registered user.

        • WriteBufferSize (integer) --

          The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
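A hedged sketch tying together the most commonly combined RedshiftSettings fields above. Host, database, credentials, and bucket are placeholders; the numeric values shown are the documented defaults:

```python
# Sketch only: RedshiftSettings for a target endpoint. All identifiers
# and credentials are placeholders.
redshift_settings = {
    "ServerName": "my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    "Port": 5439,                    # the default Redshift port
    "DatabaseName": "warehouse",
    "Username": "dms_user",
    "Password": "REPLACE_ME",
    "BucketName": "dms-staging",     # intermediate .csv storage
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-redshift-role",
    "DateFormat": "auto",            # tolerate mixed source date formats
    "TimeFormat": "auto",
    "AcceptAnyDate": True,           # invalid dates load as NULL
    "TruncateColumns": True,         # fit oversized VARCHAR/CHAR values
    "MaxFileSize": 32768,            # KB per .csv file (the default, 32 MB)
    "FileTransferUploadStreams": 10, # upload threads per file (the default)
}
```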

      • PostgreSQLSettings (dict) --

        The settings for the PostgreSQL source and target endpoint. For more information, see the PostgreSQLSettings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • MySQLSettings (dict) --

        The settings for the MySQL source and target endpoint. For more information, see the MySQLSettings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • OracleSettings (dict) --

        The settings for the Oracle source and target endpoint. For more information, see the OracleSettings structure.

        • AsmPassword (string) --

          For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

        • AsmServer (string) --

          For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

        • AsmUser (string) --

          For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • SecurityDbEncryption (string) --

For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. This SecurityDbEncryption setting is related to the SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

        • SecurityDbEncryptionName (string) --

          For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName, see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • SybaseSettings (dict) --

        The settings for the SAP ASE source and target endpoint. For more information, see the SybaseSettings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • MicrosoftSQLServerSettings (dict) --

        The settings for the Microsoft SQL Server source and target endpoint. For more information, see the MicrosoftSQLServerSettings structure.

        • Port (integer) --

          Endpoint TCP port.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • IBMDb2Settings (dict) --

        The settings for the IBM Db2 LUW source endpoint. For more information, see the IBMDb2Settings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.
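The engine-specific structures above (PostgreSQLSettings, MySQLSettings, SybaseSettings, MicrosoftSQLServerSettings, and IBMDb2Settings) all share the same five connection fields, so they can be built with one helper. The helper name and example values below are ours, not part of the API:

```python
def basic_engine_settings(server, port, database, username, password):
    """Build the five-field settings dict shared by PostgreSQLSettings,
    MySQLSettings, SybaseSettings, MicrosoftSQLServerSettings, and
    IBMDb2Settings. (Helper name is illustrative, not an API concept.)"""
    return {
        "ServerName": server,       # fully qualified domain name
        "Port": port,               # endpoint TCP port
        "DatabaseName": database,
        "Username": username,
        "Password": password,
    }

# Placeholder values for a PostgreSQL source:
pg = basic_engine_settings("pg.example.com", 5432, "appdb",
                           "dms_user", "REPLACE_ME")
# boto3.client("dms").create_endpoint(..., EngineName="postgres",
#                                     PostgreSQLSettings=pg)
```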

DeleteEndpoint (updated) Link ¶
Changes (response)
{'Endpoint': {'IBMDb2Settings': {'DatabaseName': 'string',
                                 'Password': 'string',
                                 'Port': 'integer',
                                 'ServerName': 'string',
                                 'Username': 'string'},
              'KafkaSettings': {'IncludeControlDetails': 'boolean',
                                'IncludePartitionValue': 'boolean',
                                'IncludeTableAlterOperations': 'boolean',
                                'IncludeTransactionDetails': 'boolean',
                                'MessageFormat': 'json | json-unformatted',
                                'PartitionIncludeSchemaTable': 'boolean'},
              'MicrosoftSQLServerSettings': {'DatabaseName': 'string',
                                             'Password': 'string',
                                             'Port': 'integer',
                                             'ServerName': 'string',
                                             'Username': 'string'},
              'MySQLSettings': {'DatabaseName': 'string',
                                'Password': 'string',
                                'Port': 'integer',
                                'ServerName': 'string',
                                'Username': 'string'},
              'OracleSettings': {'AsmPassword': 'string',
                                 'AsmServer': 'string',
                                 'AsmUser': 'string',
                                 'DatabaseName': 'string',
                                 'Password': 'string',
                                 'Port': 'integer',
                                 'SecurityDbEncryption': 'string',
                                 'SecurityDbEncryptionName': 'string',
                                 'ServerName': 'string',
                                 'Username': 'string'},
              'PostgreSQLSettings': {'DatabaseName': 'string',
                                     'Password': 'string',
                                     'Port': 'integer',
                                     'ServerName': 'string',
                                     'Username': 'string'},
              'SybaseSettings': {'DatabaseName': 'string',
                                 'Password': 'string',
                                 'Port': 'integer',
                                 'ServerName': 'string',
                                 'Username': 'string'}}}

Deletes the specified endpoint.

See also: AWS API Documentation

Request Syntax

client.delete_endpoint(
    EndpointArn='string'
)
type EndpointArn:

string

param EndpointArn:

[REQUIRED]

The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
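As a sketch, a deletion is usually paired with the boto3 DMS client's "endpoint_deleted" waiter so the caller blocks until the endpoint is actually gone. The ARN below is a placeholder:

```python
# Sketch: delete an endpoint and wait for the deletion to finish.
# Running this for real requires boto3 and AWS credentials; the ARN is
# a placeholder.
ENDPOINT_ARN = "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE"

def delete_and_wait(dms, endpoint_arn):
    """dms is a boto3 DMS client. Returns the deleted endpoint's last status."""
    response = dms.delete_endpoint(EndpointArn=endpoint_arn)
    # The waiter polls DescribeEndpoints until the endpoint disappears.
    dms.get_waiter("endpoint_deleted").wait(
        Filters=[{"Name": "endpoint-arn", "Values": [endpoint_arn]}]
    )
    # The response echoes the deleted endpoint's last known settings.
    return response["Endpoint"]["Status"]

# import boto3
# print(delete_and_wait(boto3.client("dms"), ENDPOINT_ARN))
```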

rtype:

dict

returns:

Response Syntax

{
    'Endpoint': {
        'EndpointIdentifier': 'string',
        'EndpointType': 'source'|'target',
        'EngineName': 'string',
        'EngineDisplayName': 'string',
        'Username': 'string',
        'ServerName': 'string',
        'Port': 123,
        'DatabaseName': 'string',
        'ExtraConnectionAttributes': 'string',
        'Status': 'string',
        'KmsKeyId': 'string',
        'EndpointArn': 'string',
        'CertificateArn': 'string',
        'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
        'ServiceAccessRoleArn': 'string',
        'ExternalTableDefinition': 'string',
        'ExternalId': 'string',
        'DynamoDbSettings': {
            'ServiceAccessRoleArn': 'string'
        },
        'S3Settings': {
            'ServiceAccessRoleArn': 'string',
            'ExternalTableDefinition': 'string',
            'CsvRowDelimiter': 'string',
            'CsvDelimiter': 'string',
            'BucketFolder': 'string',
            'BucketName': 'string',
            'CompressionType': 'none'|'gzip',
            'EncryptionMode': 'sse-s3'|'sse-kms',
            'ServerSideEncryptionKmsKeyId': 'string',
            'DataFormat': 'csv'|'parquet',
            'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
            'DictPageSizeLimit': 123,
            'RowGroupLength': 123,
            'DataPageSize': 123,
            'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
            'EnableStatistics': True|False,
            'IncludeOpForFullLoad': True|False,
            'CdcInsertsOnly': True|False,
            'TimestampColumnName': 'string',
            'ParquetTimestampInMillisecond': True|False,
            'CdcInsertsAndUpdates': True|False
        },
        'DmsTransferSettings': {
            'ServiceAccessRoleArn': 'string',
            'BucketName': 'string'
        },
        'MongoDbSettings': {
            'Username': 'string',
            'Password': 'string',
            'ServerName': 'string',
            'Port': 123,
            'DatabaseName': 'string',
            'AuthType': 'no'|'password',
            'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
            'NestingLevel': 'none'|'one',
            'ExtractDocId': 'string',
            'DocsToInvestigate': 'string',
            'AuthSource': 'string',
            'KmsKeyId': 'string'
        },
        'KinesisSettings': {
            'StreamArn': 'string',
            'MessageFormat': 'json'|'json-unformatted',
            'ServiceAccessRoleArn': 'string',
            'IncludeTransactionDetails': True|False,
            'IncludePartitionValue': True|False,
            'PartitionIncludeSchemaTable': True|False,
            'IncludeTableAlterOperations': True|False,
            'IncludeControlDetails': True|False
        },
        'KafkaSettings': {
            'Broker': 'string',
            'Topic': 'string',
            'MessageFormat': 'json'|'json-unformatted',
            'IncludeTransactionDetails': True|False,
            'IncludePartitionValue': True|False,
            'PartitionIncludeSchemaTable': True|False,
            'IncludeTableAlterOperations': True|False,
            'IncludeControlDetails': True|False
        },
        'ElasticsearchSettings': {
            'ServiceAccessRoleArn': 'string',
            'EndpointUri': 'string',
            'FullLoadErrorPercentage': 123,
            'ErrorRetryDuration': 123
        },
        'NeptuneSettings': {
            'ServiceAccessRoleArn': 'string',
            'S3BucketName': 'string',
            'S3BucketFolder': 'string',
            'ErrorRetryDuration': 123,
            'MaxFileSize': 123,
            'MaxRetryCount': 123,
            'IamAuthEnabled': True|False
        },
        'RedshiftSettings': {
            'AcceptAnyDate': True|False,
            'AfterConnectScript': 'string',
            'BucketFolder': 'string',
            'BucketName': 'string',
            'ConnectionTimeout': 123,
            'DatabaseName': 'string',
            'DateFormat': 'string',
            'EmptyAsNull': True|False,
            'EncryptionMode': 'sse-s3'|'sse-kms',
            'FileTransferUploadStreams': 123,
            'LoadTimeout': 123,
            'MaxFileSize': 123,
            'Password': 'string',
            'Port': 123,
            'RemoveQuotes': True|False,
            'ReplaceInvalidChars': 'string',
            'ReplaceChars': 'string',
            'ServerName': 'string',
            'ServiceAccessRoleArn': 'string',
            'ServerSideEncryptionKmsKeyId': 'string',
            'TimeFormat': 'string',
            'TrimBlanks': True|False,
            'TruncateColumns': True|False,
            'Username': 'string',
            'WriteBufferSize': 123
        },
        'PostgreSQLSettings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        },
        'MySQLSettings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        },
        'OracleSettings': {
            'AsmPassword': 'string',
            'AsmServer': 'string',
            'AsmUser': 'string',
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'SecurityDbEncryption': 'string',
            'SecurityDbEncryptionName': 'string',
            'ServerName': 'string',
            'Username': 'string'
        },
        'SybaseSettings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        },
        'MicrosoftSQLServerSettings': {
            'Port': 123,
            'DatabaseName': 'string',
            'Password': 'string',
            'ServerName': 'string',
            'Username': 'string'
        },
        'IBMDb2Settings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        }
    }
}

Response Structure

  • (dict) --

    • Endpoint (dict) --

      The endpoint that was deleted.

      • EndpointIdentifier (string) --

        The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.

      • EndpointType (string) --

        The type of endpoint. Valid values are source and target.

      • EngineName (string) --

        The database engine name. Valid values, depending on the EndpointType, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "documentdb", "sqlserver", and "neptune".

      • EngineDisplayName (string) --

        The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."

      • Username (string) --

        The user name used to connect to the endpoint.

      • ServerName (string) --

        The name of the server at the endpoint.

      • Port (integer) --

        The port value used to access the endpoint.

      • DatabaseName (string) --

        The name of the database at the endpoint.

      • ExtraConnectionAttributes (string) --

        Additional connection attributes used to connect to the endpoint.

      • Status (string) --

        The status of the endpoint.

      • KmsKeyId (string) --

        An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.

        If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.

        AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.

      • EndpointArn (string) --

        The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.

      • CertificateArn (string) --

        The Amazon Resource Name (ARN) used for SSL connection to the endpoint.

      • SslMode (string) --

        The SSL mode used to connect to the endpoint. The default value is none.

      • ServiceAccessRoleArn (string) --

        The Amazon Resource Name (ARN) used by the service access IAM role.

      • ExternalTableDefinition (string) --

        The external table definition.

      • ExternalId (string) --

Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint in a cross-account scenario.

      • DynamoDbSettings (dict) --

        The settings for the DynamoDB target endpoint. For more information, see the DynamoDBSettings structure.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) used by the service access IAM role.

      • S3Settings (dict) --

        The settings for the S3 target endpoint. For more information, see the S3Settings structure.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) used by the service access IAM role.

        • ExternalTableDefinition (string) --

          The external table definition.

        • CsvRowDelimiter (string) --

The delimiter used to separate rows in the source files. The default is a newline ( \n).

        • CsvDelimiter (string) --

          The delimiter used to separate columns in the source files. The default is a comma.

        • BucketFolder (string) --

          An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.

        • BucketName (string) --

          The name of the S3 bucket.

        • CompressionType (string) --

          An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.

        • EncryptionMode (string) --

          The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:

          • s3:CreateBucket

          • s3:ListBucket

          • s3:DeleteBucket

          • s3:GetBucketLocation

          • s3:GetObject

          • s3:PutObject

          • s3:DeleteObject

          • s3:GetObjectVersion

          • s3:GetBucketPolicy

          • s3:PutBucketPolicy

          • s3:DeleteBucketPolicy

        • ServerSideEncryptionKmsKeyId (string) --

          If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.

          Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value

        • DataFormat (string) --

          The format of the data that you want to use for output. You can choose one of the following:

          • csv : This is a row-based file format with comma-separated values (.csv).

          • parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.

        • EncodingType (string) --

          The type of encoding you are using:

          • RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.

          • PLAIN doesn't use encoding at all. Values are stored as they are.

          • PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.

        • DictPageSizeLimit (integer) --

The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this maximum, the column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB). This size is used for the .parquet file format only.

        • RowGroupLength (integer) --

The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for the .parquet file format only.

          If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).

        • DataPageSize (integer) --

          The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.

        • ParquetVersion (string) --

          The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.

        • EnableStatistics (boolean) --

          A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.

        • IncludeOpForFullLoad (boolean) --

          A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.

          For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.

        • CdcInsertsOnly (boolean) --

          A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.

If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.

        • TimestampColumnName (string) --

A value that, when nonblank, causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.

          DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.

          For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.

          For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.

          The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.

          When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.

        • ParquetTimestampInMillisecond (boolean) --

          A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.

          When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.

          Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.

        • CdcInsertsAndUpdates (boolean) --

A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.

For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.

      • DmsTransferSettings (dict) --

        The settings in JSON format for the DMS transfer type of source endpoint.

        Possible settings include the following:

        • ServiceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.

        • BucketName - The name of the S3 bucket to use.

        • CompressionType - An optional parameter to use GZIP to compress the target files. Set this value to GZIP to compress the target files. Either set this value to NONE (the default) or don't use it to leave the files uncompressed.

        Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string

        JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }

        • ServiceAccessRoleArn (string) --

          The IAM role that has permission to access the Amazon S3 bucket.

        • BucketName (string) --

          The name of the S3 bucket to use.

      • MongoDbSettings (dict) --

        The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.

        • Username (string) --

          The user name you use to access the MongoDB source endpoint.

        • Password (string) --

          The password for the user account you use to access the MongoDB source endpoint.

        • ServerName (string) --

          The name of the server on the MongoDB source endpoint.

        • Port (integer) --

          The port value for the MongoDB source endpoint.

        • DatabaseName (string) --

          The database name on the MongoDB source endpoint.

        • AuthType (string) --

          The authentication type you use to access the MongoDB source endpoint.

          When set to "no", the user name and password parameters are not used and can be empty.

        • AuthMechanism (string) --

          The authentication mechanism you use to access the MongoDB source endpoint.

          For MongoDB version 2.x, the default is "mongodb_cr". For MongoDB version 3.x or later, the default is "scram_sha_1". This setting isn't used when AuthType is set to "no".

        • NestingLevel (string) --

          Specifies either document or table mode.

          Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.

        • ExtractDocId (string) --

          Specifies the document ID. Use this setting when NestingLevel is set to "none".

          Default value is "false".

        • DocsToInvestigate (string) --

          Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one".

          Must be a positive value greater than 0. Default value is 1000.

        • AuthSource (string) --

          The MongoDB database name. This setting isn't used when AuthType is set to "no".

          The default is "admin".

        • KmsKeyId (string) --

          The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.

      • KinesisSettings (dict) --

        The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.

        • StreamArn (string) --

          The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.

        • MessageFormat (string) --

          The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.

        • IncludeTransactionDetails (boolean) --

          Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

        • IncludePartitionValue (boolean) --

          Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type. The default is False.

        • PartitionIncludeSchemaTable (boolean) --

          Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False.

        • IncludeTableAlterOperations (boolean) --

          Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

        • IncludeControlDetails (boolean) --

          Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False.

      • KafkaSettings (dict) --

        The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.

        • Broker (string) --

          The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

        • Topic (string) --

          The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

        • MessageFormat (string) --

          The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

        • IncludeTransactionDetails (boolean) --

          Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

        • IncludePartitionValue (boolean) --

          Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is False.

        • PartitionIncludeSchemaTable (boolean) --

          Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is False.

        • IncludeTableAlterOperations (boolean) --

          Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

        • IncludeControlDetails (boolean) --

          Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is False.

      • ElasticsearchSettings (dict) --

        The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) used by the service to access the IAM role.

        • EndpointUri (string) --

          The endpoint for the Elasticsearch cluster.

        • FullLoadErrorPercentage (integer) --

          The maximum percentage of records that can fail to be written before a full load operation stops.

        • ErrorRetryDuration (integer) --

          The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.

      • NeptuneSettings (dict) --

        The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.

        • S3BucketName (string) --

          The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.

        • S3BucketFolder (string) --

          A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.

        • ErrorRetryDuration (integer) --

          The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.

        • MaxFileSize (integer) --

          The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.

        • MaxRetryCount (integer) --

          The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.

        • IamAuthEnabled (boolean) --

          If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true. Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn. The default is false.

      • RedshiftSettings (dict) --

        Settings for the Amazon Redshift endpoint.

        • AcceptAnyDate (boolean) --

          A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).

          This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.

        • AfterConnectScript (string) --

          Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.

        • BucketFolder (string) --

          The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.

        • BucketName (string) --

          The name of the S3 bucket you want to use.

        • ConnectionTimeout (integer) --

          A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.

        • DatabaseName (string) --

          The name of the Amazon Redshift data warehouse (service) that you are working with.

        • DateFormat (string) --

          The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.

          If your date and time values use formats different from each other, set this to auto.

        • EmptyAsNull (boolean) --

          A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.

        • EncryptionMode (string) --

          The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"

        • FileTransferUploadStreams (integer) --

          The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.

        • LoadTimeout (integer) --

          The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.

        • MaxFileSize (integer) --

          The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).

        • Password (string) --

          The password for the user named in the username property.

        • Port (integer) --

          The port number for Amazon Redshift. The default value is 5439.

        • RemoveQuotes (boolean) --

          A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.

        • ReplaceInvalidChars (string) --

          A list of characters that you want to replace. Use with ReplaceChars.

        • ReplaceChars (string) --

          A value that replaces the invalid characters specified in ReplaceInvalidChars with the characters specified here. The default is "?".

        • ServerName (string) --

          The name of the Amazon Redshift cluster you are using.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.

        • ServerSideEncryptionKmsKeyId (string) --

          The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.

        • TimeFormat (string) --

          The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that aren't supported when you use a time format string.

          If your date and time values use formats different from each other, set this parameter to auto.

        • TrimBlanks (boolean) --

          A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.

        • TruncateColumns (boolean) --

          A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.

        • Username (string) --

          An Amazon Redshift user name for a registered user.

        • WriteBufferSize (integer) --

          The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.

      • PostgreSQLSettings (dict) --

        The settings for the PostgreSQL source and target endpoint. For more information, see the PostgreSQLSettings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • MySQLSettings (dict) --

        The settings for the MySQL source and target endpoint. For more information, see the MySQLSettings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • OracleSettings (dict) --

        The settings for the Oracle source and target endpoint. For more information, see the OracleSettings structure.

        • AsmPassword (string) --

          For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

        • AsmServer (string) --

          For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

        • AsmUser (string) --

          For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • SecurityDbEncryption (string) --

          For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. The SecurityDbEncryption setting is related to this SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

        • SecurityDbEncryptionName (string) --

          For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName, see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • SybaseSettings (dict) --

        The settings for the SAP ASE source and target endpoint. For more information, see the SybaseSettings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • MicrosoftSQLServerSettings (dict) --

        The settings for the Microsoft SQL Server source and target endpoint. For more information, see the MicrosoftSQLServerSettings structure.

        • Port (integer) --

          Endpoint TCP port.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • IBMDb2Settings (dict) --

        The settings for the IBM Db2 LUW source endpoint. For more information, see the IBMDb2Settings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

DescribeEndpoints (updated) Link ¶
Changes (response)
{'Endpoints': {'IBMDb2Settings': {'DatabaseName': 'string',
                                  'Password': 'string',
                                  'Port': 'integer',
                                  'ServerName': 'string',
                                  'Username': 'string'},
               'KafkaSettings': {'IncludeControlDetails': 'boolean',
                                 'IncludePartitionValue': 'boolean',
                                 'IncludeTableAlterOperations': 'boolean',
                                 'IncludeTransactionDetails': 'boolean',
                                 'MessageFormat': 'json | json-unformatted',
                                 'PartitionIncludeSchemaTable': 'boolean'},
               'MicrosoftSQLServerSettings': {'DatabaseName': 'string',
                                              'Password': 'string',
                                              'Port': 'integer',
                                              'ServerName': 'string',
                                              'Username': 'string'},
               'MySQLSettings': {'DatabaseName': 'string',
                                 'Password': 'string',
                                 'Port': 'integer',
                                 'ServerName': 'string',
                                 'Username': 'string'},
               'OracleSettings': {'AsmPassword': 'string',
                                  'AsmServer': 'string',
                                  'AsmUser': 'string',
                                  'DatabaseName': 'string',
                                  'Password': 'string',
                                  'Port': 'integer',
                                  'SecurityDbEncryption': 'string',
                                  'SecurityDbEncryptionName': 'string',
                                  'ServerName': 'string',
                                  'Username': 'string'},
               'PostgreSQLSettings': {'DatabaseName': 'string',
                                      'Password': 'string',
                                      'Port': 'integer',
                                      'ServerName': 'string',
                                      'Username': 'string'},
               'SybaseSettings': {'DatabaseName': 'string',
                                  'Password': 'string',
                                  'Port': 'integer',
                                  'ServerName': 'string',
                                  'Username': 'string'}}}

Returns information about the endpoints for your account in the current region.

See also: AWS API Documentation

Request Syntax

client.describe_endpoints(
    Filters=[
        {
            'Name': 'string',
            'Values': [
                'string',
            ]
        },
    ],
    MaxRecords=123,
    Marker='string'
)
type Filters:

list

param Filters:

Filters applied to the endpoints.

Valid filter names: endpoint-arn | endpoint-type | endpoint-id | engine-name

  • (dict) --

    Identifies the name and value of a filter object. This filter is used to limit the number and type of AWS DMS objects that are returned for a particular Describe* or similar operation.

    • Name (string) -- [REQUIRED]

      The name of the filter as specified for a Describe* or similar operation.

    • Values (list) -- [REQUIRED]

      The filter value, which can specify one or more values used to narrow the returned results.

      • (string) --
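As a sketch, the filter structure above can be built as a plain list of dicts before passing it to the client. The engine values and region here are placeholders, not values from this document.

```python
# Filter names come from the valid list above:
# endpoint-arn | endpoint-type | endpoint-id | engine-name
# The engine values below are illustrative placeholders.
filters = [
    {"Name": "endpoint-type", "Values": ["source"]},
    {"Name": "engine-name", "Values": ["mysql", "postgres"]},
]

# With a configured boto3 DMS client, the call would look like:
# response = client.describe_endpoints(Filters=filters, MaxRecords=50)
```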

type MaxRecords:

integer

param MaxRecords:

The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.

Default: 100

Constraints: Minimum 20, maximum 100.

type Marker:

string

param Marker:

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
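A minimal sketch of draining all pages with the Marker token, assuming `client` is a boto3 DMS client (or any object exposing a compatible `describe_endpoints` method):

```python
def describe_all_endpoints(client):
    """Collect every endpoint across pages by following the Marker token."""
    endpoints = []
    marker = None
    while True:
        kwargs = {"MaxRecords": 100}
        if marker:
            kwargs["Marker"] = marker
        page = client.describe_endpoints(**kwargs)
        endpoints.extend(page.get("Endpoints", []))
        marker = page.get("Marker")
        if not marker:  # no Marker in the response means this was the last page
            return endpoints
```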

rtype:

dict

returns:

Response Syntax

{
    'Marker': 'string',
    'Endpoints': [
        {
            'EndpointIdentifier': 'string',
            'EndpointType': 'source'|'target',
            'EngineName': 'string',
            'EngineDisplayName': 'string',
            'Username': 'string',
            'ServerName': 'string',
            'Port': 123,
            'DatabaseName': 'string',
            'ExtraConnectionAttributes': 'string',
            'Status': 'string',
            'KmsKeyId': 'string',
            'EndpointArn': 'string',
            'CertificateArn': 'string',
            'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
            'ServiceAccessRoleArn': 'string',
            'ExternalTableDefinition': 'string',
            'ExternalId': 'string',
            'DynamoDbSettings': {
                'ServiceAccessRoleArn': 'string'
            },
            'S3Settings': {
                'ServiceAccessRoleArn': 'string',
                'ExternalTableDefinition': 'string',
                'CsvRowDelimiter': 'string',
                'CsvDelimiter': 'string',
                'BucketFolder': 'string',
                'BucketName': 'string',
                'CompressionType': 'none'|'gzip',
                'EncryptionMode': 'sse-s3'|'sse-kms',
                'ServerSideEncryptionKmsKeyId': 'string',
                'DataFormat': 'csv'|'parquet',
                'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
                'DictPageSizeLimit': 123,
                'RowGroupLength': 123,
                'DataPageSize': 123,
                'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
                'EnableStatistics': True|False,
                'IncludeOpForFullLoad': True|False,
                'CdcInsertsOnly': True|False,
                'TimestampColumnName': 'string',
                'ParquetTimestampInMillisecond': True|False,
                'CdcInsertsAndUpdates': True|False
            },
            'DmsTransferSettings': {
                'ServiceAccessRoleArn': 'string',
                'BucketName': 'string'
            },
            'MongoDbSettings': {
                'Username': 'string',
                'Password': 'string',
                'ServerName': 'string',
                'Port': 123,
                'DatabaseName': 'string',
                'AuthType': 'no'|'password',
                'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
                'NestingLevel': 'none'|'one',
                'ExtractDocId': 'string',
                'DocsToInvestigate': 'string',
                'AuthSource': 'string',
                'KmsKeyId': 'string'
            },
            'KinesisSettings': {
                'StreamArn': 'string',
                'MessageFormat': 'json'|'json-unformatted',
                'ServiceAccessRoleArn': 'string',
                'IncludeTransactionDetails': True|False,
                'IncludePartitionValue': True|False,
                'PartitionIncludeSchemaTable': True|False,
                'IncludeTableAlterOperations': True|False,
                'IncludeControlDetails': True|False
            },
            'KafkaSettings': {
                'Broker': 'string',
                'Topic': 'string',
                'MessageFormat': 'json'|'json-unformatted',
                'IncludeTransactionDetails': True|False,
                'IncludePartitionValue': True|False,
                'PartitionIncludeSchemaTable': True|False,
                'IncludeTableAlterOperations': True|False,
                'IncludeControlDetails': True|False
            },
            'ElasticsearchSettings': {
                'ServiceAccessRoleArn': 'string',
                'EndpointUri': 'string',
                'FullLoadErrorPercentage': 123,
                'ErrorRetryDuration': 123
            },
            'NeptuneSettings': {
                'ServiceAccessRoleArn': 'string',
                'S3BucketName': 'string',
                'S3BucketFolder': 'string',
                'ErrorRetryDuration': 123,
                'MaxFileSize': 123,
                'MaxRetryCount': 123,
                'IamAuthEnabled': True|False
            },
            'RedshiftSettings': {
                'AcceptAnyDate': True|False,
                'AfterConnectScript': 'string',
                'BucketFolder': 'string',
                'BucketName': 'string',
                'ConnectionTimeout': 123,
                'DatabaseName': 'string',
                'DateFormat': 'string',
                'EmptyAsNull': True|False,
                'EncryptionMode': 'sse-s3'|'sse-kms',
                'FileTransferUploadStreams': 123,
                'LoadTimeout': 123,
                'MaxFileSize': 123,
                'Password': 'string',
                'Port': 123,
                'RemoveQuotes': True|False,
                'ReplaceInvalidChars': 'string',
                'ReplaceChars': 'string',
                'ServerName': 'string',
                'ServiceAccessRoleArn': 'string',
                'ServerSideEncryptionKmsKeyId': 'string',
                'TimeFormat': 'string',
                'TrimBlanks': True|False,
                'TruncateColumns': True|False,
                'Username': 'string',
                'WriteBufferSize': 123
            },
            'PostgreSQLSettings': {
                'DatabaseName': 'string',
                'Password': 'string',
                'Port': 123,
                'ServerName': 'string',
                'Username': 'string'
            },
            'MySQLSettings': {
                'DatabaseName': 'string',
                'Password': 'string',
                'Port': 123,
                'ServerName': 'string',
                'Username': 'string'
            },
            'OracleSettings': {
                'AsmPassword': 'string',
                'AsmServer': 'string',
                'AsmUser': 'string',
                'DatabaseName': 'string',
                'Password': 'string',
                'Port': 123,
                'SecurityDbEncryption': 'string',
                'SecurityDbEncryptionName': 'string',
                'ServerName': 'string',
                'Username': 'string'
            },
            'SybaseSettings': {
                'DatabaseName': 'string',
                'Password': 'string',
                'Port': 123,
                'ServerName': 'string',
                'Username': 'string'
            },
            'MicrosoftSQLServerSettings': {
                'Port': 123,
                'DatabaseName': 'string',
                'Password': 'string',
                'ServerName': 'string',
                'Username': 'string'
            },
            'IBMDb2Settings': {
                'DatabaseName': 'string',
                'Password': 'string',
                'Port': 123,
                'ServerName': 'string',
                'Username': 'string'
            }
        },
    ]
}

Response Structure

  • (dict) --

    • Marker (string) --

      An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

    • Endpoints (list) --

      A list of endpoint descriptions.

      • (dict) --

        Describes an endpoint of a database instance in response to operations such as the following:

        • CreateEndpoint

        • DescribeEndpoints

        • DescribeEndpointTypes

        • ModifyEndpoint

        • EndpointIdentifier (string) --

          The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.

        • EndpointType (string) --

          The type of endpoint. Valid values are source and target.

        • EngineName (string) --

          The database engine name. Valid values, depending on the EndpointType, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "documentdb", "sqlserver", and "neptune".

        • EngineDisplayName (string) --

          The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."

        • Username (string) --

          The user name used to connect to the endpoint.

        • ServerName (string) --

          The name of the server at the endpoint.

        • Port (integer) --

          The port value used to access the endpoint.

        • DatabaseName (string) --

          The name of the database at the endpoint.

        • ExtraConnectionAttributes (string) --

          Additional connection attributes used to connect to the endpoint.

        • Status (string) --

          The status of the endpoint.

        • KmsKeyId (string) --

          An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.

          If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.

          AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.

        • EndpointArn (string) --

          The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.

        • CertificateArn (string) --

          The Amazon Resource Name (ARN) used for SSL connection to the endpoint.

        • SslMode (string) --

          The SSL mode used to connect to the endpoint. The default value is none.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) used by the service access IAM role.

        • ExternalTableDefinition (string) --

          The external table definition.

        • ExternalId (string) --

          Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint cross-account.

        • DynamoDbSettings (dict) --

          The settings for the DynamoDB target endpoint. For more information, see the DynamoDBSettings structure.

          • ServiceAccessRoleArn (string) --

            The Amazon Resource Name (ARN) used by the service access IAM role.

        • S3Settings (dict) --

          The settings for the S3 target endpoint. For more information, see the S3Settings structure.

          • ServiceAccessRoleArn (string) --

            The Amazon Resource Name (ARN) used by the service access IAM role.

          • ExternalTableDefinition (string) --

            The external table definition.

          • CsvRowDelimiter (string) --

            The delimiter used to separate rows in the source files. The default is a newline (\n).

          • CsvDelimiter (string) --

            The delimiter used to separate columns in the source files. The default is a comma.

          • BucketFolder (string) --

            An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.

          • BucketName (string) --

            The name of the S3 bucket.

          • CompressionType (string) --

            An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.

          • EncryptionMode (string) --

            The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:

            • s3:CreateBucket

            • s3:ListBucket

            • s3:DeleteBucket

            • s3:GetBucketLocation

            • s3:GetObject

            • s3:PutObject

            • s3:DeleteObject

            • s3:GetObjectVersion

            • s3:GetBucketPolicy

            • s3:PutBucketPolicy

            • s3:DeleteBucketPolicy

          • ServerSideEncryptionKmsKeyId (string) --

            If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.

            Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
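A boto3-style equivalent of the CLI example above can be sketched as follows. The identifier, ARNs, bucket name, and folder are placeholders; the EncryptionMode value uses the lowercase form shown in the response syntax.

```python
# Parameters mirroring the CLI example; all concrete values are placeholders.
create_endpoint_params = {
    "EndpointIdentifier": "my-s3-target",
    "EndpointType": "target",
    "EngineName": "s3",
    "S3Settings": {
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/my-dms-role",
        "BucketFolder": "my-folder",
        "BucketName": "my-bucket",
        "EncryptionMode": "sse-kms",
        "ServerSideEncryptionKmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/my-key-id",
    },
}

# With a configured boto3 DMS client:
# response = client.create_endpoint(**create_endpoint_params)
```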

          • DataFormat (string) --

            The format of the data that you want to use for output. You can choose one of the following:

            • csv : This is a row-based file format with comma-separated values (.csv).

            • parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.

          • EncodingType (string) --

            The type of encoding you are using:

            • RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.

            • PLAIN doesn't use encoding at all. Values are stored as they are.

            • PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.

          • DictPageSizeLimit (integer) --

            The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.

          • RowGroupLength (integer) --

            The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.

            If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).

          • DataPageSize (integer) --

            The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.

          • ParquetVersion (string) --

            The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.

          • EnableStatistics (boolean) --

            A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.

          • IncludeOpForFullLoad (boolean) --

            A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.

            For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.

          • CdcInsertsOnly (boolean) --

            A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.

            If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.

          • TimestampColumnName (string) --

            A value that, when nonblank, causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.

            DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.

            For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.

            For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.

            The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.

            When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.
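The yyyy-MM-dd HH:mm:ss.SSSSSS format above is a Java-style pattern; in Python it corresponds to the strptime pattern "%Y-%m-%d %H:%M:%S.%f", as this sketch with a made-up timestamp value shows:

```python
from datetime import datetime

# Parse a value from the added timestamp column. The documented format
# "yyyy-MM-dd HH:mm:ss.SSSSSS" maps to "%Y-%m-%d %H:%M:%S.%f" in Python.
ts = datetime.strptime("2020-07-27 13:45:09.123456", "%Y-%m-%d %H:%M:%S.%f")
print(ts.year, ts.microsecond)  # -> 2020 123456
```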

          • ParquetTimestampInMillisecond (boolean) --

            A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.

            When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.

            Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.

          • CdcInsertsAndUpdates (boolean) --

            A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.

            For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.

        • DmsTransferSettings (dict) --

          The settings in JSON format for the DMS transfer type of source endpoint.

          Possible settings include the following:

          • ServiceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.

          • BucketName - The name of the S3 bucket to use.

          • CompressionType - An optional parameter to use GZIP to compress the target files. Set this value to GZIP to compress the target files. Either set this value to NONE (the default) or don't use it to leave the files uncompressed.

          Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string

          JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }

          • ServiceAccessRoleArn (string) --

            The IAM role that has permission to access the Amazon S3 bucket.

          • BucketName (string) --

            The name of the S3 bucket to use.

        • MongoDbSettings (dict) --

          The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.

          • Username (string) --

            The user name you use to access the MongoDB source endpoint.

          • Password (string) --

            The password for the user account you use to access the MongoDB source endpoint.

          • ServerName (string) --

            The name of the server on the MongoDB source endpoint.

          • Port (integer) --

            The port value for the MongoDB source endpoint.

          • DatabaseName (string) --

            The database name on the MongoDB source endpoint.

          • AuthType (string) --

            The authentication type you use to access the MongoDB source endpoint.

            When set to "no", user name and password parameters are not used and can be empty.

          • AuthMechanism (string) --

            The authentication mechanism you use to access the MongoDB source endpoint.

            The default value depends on the MongoDB version: for MongoDB version 2.x, "default" is "mongodb_cr"; for MongoDB version 3.x or later, "default" is "scram_sha_1". This setting isn't used when AuthType is set to "no".

          • NestingLevel (string) --

            Specifies either document or table mode.

            Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.

          • ExtractDocId (string) --

            Specifies the document ID. Use this setting when NestingLevel is set to "none".

            Default value is "false".

          • DocsToInvestigate (string) --

            Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one".

            Must be a value greater than 0. Default value is 1000.

          • AuthSource (string) --

            The MongoDB database name. This setting isn't used when AuthType is set to "no".

            The default is "admin".

          • KmsKeyId (string) --

            The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.

        • KinesisSettings (dict) --

          The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.

          • StreamArn (string) --

            The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.

          • MessageFormat (string) --

            The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

          • ServiceAccessRoleArn (string) --

            The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.

          • IncludeTransactionDetails (boolean) --

            Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

          • IncludePartitionValue (boolean) --

            Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type. The default is False.

          • PartitionIncludeSchemaTable (boolean) --

            Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False.

          • IncludeTableAlterOperations (boolean) --

            Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

          • IncludeControlDetails (boolean) --

            Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False.

        • KafkaSettings (dict) --

          The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.

          • Broker (string) --

            The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

          • Topic (string) --

            The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

          • MessageFormat (string) --

            The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

          • IncludeTransactionDetails (boolean) --

            Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

          • IncludePartitionValue (boolean) --

            Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is False.

          • PartitionIncludeSchemaTable (boolean) --

            Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is False.

          • IncludeTableAlterOperations (boolean) --

            Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

          • IncludeControlDetails (boolean) --

            Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is False.

        • ElasticsearchSettings (dict) --

          The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.

          • ServiceAccessRoleArn (string) --

            The Amazon Resource Name (ARN) used by the service to access the IAM role.

          • EndpointUri (string) --

            The endpoint for the Elasticsearch cluster.

          • FullLoadErrorPercentage (integer) --

            The maximum percentage of records that can fail to be written before a full load operation stops.

          • ErrorRetryDuration (integer) --

            The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.

        • NeptuneSettings (dict) --

          The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.

          • ServiceAccessRoleArn (string) --

            The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.

          • S3BucketName (string) --

            The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.

          • S3BucketFolder (string) --

            A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.

          • ErrorRetryDuration (integer) --

            The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.

          • MaxFileSize (integer) --

            The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.

          • MaxRetryCount (integer) --

            The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.

          • IamAuthEnabled (boolean) --

            If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true. Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn. The default is false.

        • RedshiftSettings (dict) --

          Settings for the Amazon Redshift endpoint.

          • AcceptAnyDate (boolean) --

            A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).

            This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.

          • AfterConnectScript (string) --

            Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.

          • BucketFolder (string) --

            The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.

          • BucketName (string) --

            The name of the S3 bucket that you want to use.

          • ConnectionTimeout (integer) --

            A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.

          • DatabaseName (string) --

            The name of the Amazon Redshift data warehouse (service) that you are working with.

          • DateFormat (string) --

            The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.

            If your date and time values use formats different from each other, set this to auto.

          • EmptyAsNull (boolean) --

            A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.

          • EncryptionMode (string) --

            The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject" and "s3:ListBucket".

          • FileTransferUploadStreams (integer) --

            The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.

          • LoadTimeout (integer) --

            The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.

          • MaxFileSize (integer) --

            The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).

          • Password (string) --

            The password for the user named in the username property.

          • Port (integer) --

            The port number for Amazon Redshift. The default value is 5439.

          • RemoveQuotes (boolean) --

            A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.

          • ReplaceInvalidChars (string) --

            A list of characters that you want to replace. Use with ReplaceChars.

          • ReplaceChars (string) --

            A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars. The default is "?".

          • ServerName (string) --

            The name of the Amazon Redshift cluster you are using.

          • ServiceAccessRoleArn (string) --

            The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.

          • ServerSideEncryptionKmsKeyId (string) --

            The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.

          • TimeFormat (string) --

            The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that aren't supported when you use a time format string.

            If your date and time values use formats different from each other, set this parameter to auto.

          • TrimBlanks (boolean) --

            A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.

          • TruncateColumns (boolean) --

            A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.

          • Username (string) --

            An Amazon Redshift user name for a registered user.

          • WriteBufferSize (integer) --

            The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.

        • PostgreSQLSettings (dict) --

          The settings for the PostgreSQL source and target endpoint. For more information, see the PostgreSQLSettings structure.

          • DatabaseName (string) --

            Database name for the endpoint.

          • Password (string) --

            Endpoint connection password.

          • Port (integer) --

            Endpoint TCP port.

          • ServerName (string) --

            Fully qualified domain name of the endpoint.

          • Username (string) --

            Endpoint connection user name.

        • MySQLSettings (dict) --

          The settings for the MySQL source and target endpoint. For more information, see the MySQLSettings structure.

          • DatabaseName (string) --

            Database name for the endpoint.

          • Password (string) --

            Endpoint connection password.

          • Port (integer) --

            Endpoint TCP port.

          • ServerName (string) --

            Fully qualified domain name of the endpoint.

          • Username (string) --

            Endpoint connection user name.

        • OracleSettings (dict) --

          The settings for the Oracle source and target endpoint. For more information, see the OracleSettings structure.

          • AsmPassword (string) --

            For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

          • AsmServer (string) --

            For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

          • AsmUser (string) --

            For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

          • DatabaseName (string) --

            Database name for the endpoint.

          • Password (string) --

            Endpoint connection password.

          • Port (integer) --

            Endpoint TCP port.

          • SecurityDbEncryption (string) --

            For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. The SecurityDbEncryption setting is related to this SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

          • SecurityDbEncryptionName (string) --

            For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName, see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

          • ServerName (string) --

            Fully qualified domain name of the endpoint.

          • Username (string) --

            Endpoint connection user name.

        • SybaseSettings (dict) --

          The settings for the SAP ASE source and target endpoint. For more information, see the SybaseSettings structure.

          • DatabaseName (string) --

            Database name for the endpoint.

          • Password (string) --

            Endpoint connection password.

          • Port (integer) --

            Endpoint TCP port.

          • ServerName (string) --

            Fully qualified domain name of the endpoint.

          • Username (string) --

            Endpoint connection user name.

        • MicrosoftSQLServerSettings (dict) --

          The settings for the Microsoft SQL Server source and target endpoint. For more information, see the MicrosoftSQLServerSettings structure.

          • Port (integer) --

            Endpoint TCP port.

          • DatabaseName (string) --

            Database name for the endpoint.

          • Password (string) --

            Endpoint connection password.

          • ServerName (string) --

            Fully qualified domain name of the endpoint.

          • Username (string) --

            Endpoint connection user name.

        • IBMDb2Settings (dict) --

          The settings for the IBM Db2 LUW source endpoint. For more information, see the IBMDb2Settings structure.

          • DatabaseName (string) --

            Database name for the endpoint.

          • Password (string) --

            Endpoint connection password.

          • Port (integer) --

            Endpoint TCP port.

          • ServerName (string) --

            Fully qualified domain name of the endpoint.

          • Username (string) --

            Endpoint connection user name.
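The relational engine settings above (PostgreSQL, MySQL, Oracle, SAP ASE, SQL Server, and Db2 LUW) all expose the same five connection fields under an engine-specific key. As a rough illustration (not part of the DMS API), a small helper can pick the right settings block out of an endpoint description; the engine-to-key mapping and the sample endpoint below are assumptions for the sketch:

```python
# Hypothetical endpoint description, shaped like the response structure above.
endpoint = {
    "EndpointIdentifier": "my-source",
    "EngineName": "postgres",
    "PostgreSQLSettings": {
        "DatabaseName": "inventory",
        "Port": 5432,
        "ServerName": "db.example.com",
        "Username": "dms_user",
    },
}

# Map each engine name to the response key holding its engine-specific settings.
ENGINE_SETTINGS_KEY = {
    "postgres": "PostgreSQLSettings",
    "mysql": "MySQLSettings",
    "oracle": "OracleSettings",
    "sybase": "SybaseSettings",
    "sqlserver": "MicrosoftSQLServerSettings",
    "db2": "IBMDb2Settings",
}

def engine_settings(ep):
    """Return the engine-specific settings dict for an endpoint, or None."""
    key = ENGINE_SETTINGS_KEY.get(ep.get("EngineName", ""))
    return ep.get(key) if key else None

engine_settings(endpoint)["ServerName"]  # → "db.example.com"
```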

ModifyEndpoint (updated) Link ¶
Changes (request, response)
Request
{'IBMDb2Settings': {'DatabaseName': 'string',
                    'Password': 'string',
                    'Port': 'integer',
                    'ServerName': 'string',
                    'Username': 'string'},
 'KafkaSettings': {'IncludeControlDetails': 'boolean',
                   'IncludePartitionValue': 'boolean',
                   'IncludeTableAlterOperations': 'boolean',
                   'IncludeTransactionDetails': 'boolean',
                   'MessageFormat': 'json | json-unformatted',
                   'PartitionIncludeSchemaTable': 'boolean'},
 'MicrosoftSQLServerSettings': {'DatabaseName': 'string',
                                'Password': 'string',
                                'Port': 'integer',
                                'ServerName': 'string',
                                'Username': 'string'},
 'MySQLSettings': {'DatabaseName': 'string',
                   'Password': 'string',
                   'Port': 'integer',
                   'ServerName': 'string',
                   'Username': 'string'},
 'OracleSettings': {'AsmPassword': 'string',
                    'AsmServer': 'string',
                    'AsmUser': 'string',
                    'DatabaseName': 'string',
                    'Password': 'string',
                    'Port': 'integer',
                    'SecurityDbEncryption': 'string',
                    'SecurityDbEncryptionName': 'string',
                    'ServerName': 'string',
                    'Username': 'string'},
 'PostgreSQLSettings': {'DatabaseName': 'string',
                        'Password': 'string',
                        'Port': 'integer',
                        'ServerName': 'string',
                        'Username': 'string'},
 'SybaseSettings': {'DatabaseName': 'string',
                    'Password': 'string',
                    'Port': 'integer',
                    'ServerName': 'string',
                    'Username': 'string'}}
Response
{'Endpoint': {'IBMDb2Settings': {'DatabaseName': 'string',
                                 'Password': 'string',
                                 'Port': 'integer',
                                 'ServerName': 'string',
                                 'Username': 'string'},
              'KafkaSettings': {'IncludeControlDetails': 'boolean',
                                'IncludePartitionValue': 'boolean',
                                'IncludeTableAlterOperations': 'boolean',
                                'IncludeTransactionDetails': 'boolean',
                                'MessageFormat': 'json | json-unformatted',
                                'PartitionIncludeSchemaTable': 'boolean'},
              'MicrosoftSQLServerSettings': {'DatabaseName': 'string',
                                             'Password': 'string',
                                             'Port': 'integer',
                                             'ServerName': 'string',
                                             'Username': 'string'},
              'MySQLSettings': {'DatabaseName': 'string',
                                'Password': 'string',
                                'Port': 'integer',
                                'ServerName': 'string',
                                'Username': 'string'},
              'OracleSettings': {'AsmPassword': 'string',
                                 'AsmServer': 'string',
                                 'AsmUser': 'string',
                                 'DatabaseName': 'string',
                                 'Password': 'string',
                                 'Port': 'integer',
                                 'SecurityDbEncryption': 'string',
                                 'SecurityDbEncryptionName': 'string',
                                 'ServerName': 'string',
                                 'Username': 'string'},
              'PostgreSQLSettings': {'DatabaseName': 'string',
                                     'Password': 'string',
                                     'Port': 'integer',
                                     'ServerName': 'string',
                                     'Username': 'string'},
              'SybaseSettings': {'DatabaseName': 'string',
                                 'Password': 'string',
                                 'Port': 'integer',
                                 'ServerName': 'string',
                                 'Username': 'string'}}}

Modifies the specified endpoint.

See also: AWS API Documentation

Request Syntax

client.modify_endpoint(
    EndpointArn='string',
    EndpointIdentifier='string',
    EndpointType='source'|'target',
    EngineName='string',
    Username='string',
    Password='string',
    ServerName='string',
    Port=123,
    DatabaseName='string',
    ExtraConnectionAttributes='string',
    CertificateArn='string',
    SslMode='none'|'require'|'verify-ca'|'verify-full',
    ServiceAccessRoleArn='string',
    ExternalTableDefinition='string',
    DynamoDbSettings={
        'ServiceAccessRoleArn': 'string'
    },
    S3Settings={
        'ServiceAccessRoleArn': 'string',
        'ExternalTableDefinition': 'string',
        'CsvRowDelimiter': 'string',
        'CsvDelimiter': 'string',
        'BucketFolder': 'string',
        'BucketName': 'string',
        'CompressionType': 'none'|'gzip',
        'EncryptionMode': 'sse-s3'|'sse-kms',
        'ServerSideEncryptionKmsKeyId': 'string',
        'DataFormat': 'csv'|'parquet',
        'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
        'DictPageSizeLimit': 123,
        'RowGroupLength': 123,
        'DataPageSize': 123,
        'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
        'EnableStatistics': True|False,
        'IncludeOpForFullLoad': True|False,
        'CdcInsertsOnly': True|False,
        'TimestampColumnName': 'string',
        'ParquetTimestampInMillisecond': True|False,
        'CdcInsertsAndUpdates': True|False
    },
    DmsTransferSettings={
        'ServiceAccessRoleArn': 'string',
        'BucketName': 'string'
    },
    MongoDbSettings={
        'Username': 'string',
        'Password': 'string',
        'ServerName': 'string',
        'Port': 123,
        'DatabaseName': 'string',
        'AuthType': 'no'|'password',
        'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
        'NestingLevel': 'none'|'one',
        'ExtractDocId': 'string',
        'DocsToInvestigate': 'string',
        'AuthSource': 'string',
        'KmsKeyId': 'string'
    },
    KinesisSettings={
        'StreamArn': 'string',
        'MessageFormat': 'json'|'json-unformatted',
        'ServiceAccessRoleArn': 'string',
        'IncludeTransactionDetails': True|False,
        'IncludePartitionValue': True|False,
        'PartitionIncludeSchemaTable': True|False,
        'IncludeTableAlterOperations': True|False,
        'IncludeControlDetails': True|False
    },
    KafkaSettings={
        'Broker': 'string',
        'Topic': 'string',
        'MessageFormat': 'json'|'json-unformatted',
        'IncludeTransactionDetails': True|False,
        'IncludePartitionValue': True|False,
        'PartitionIncludeSchemaTable': True|False,
        'IncludeTableAlterOperations': True|False,
        'IncludeControlDetails': True|False
    },
    ElasticsearchSettings={
        'ServiceAccessRoleArn': 'string',
        'EndpointUri': 'string',
        'FullLoadErrorPercentage': 123,
        'ErrorRetryDuration': 123
    },
    NeptuneSettings={
        'ServiceAccessRoleArn': 'string',
        'S3BucketName': 'string',
        'S3BucketFolder': 'string',
        'ErrorRetryDuration': 123,
        'MaxFileSize': 123,
        'MaxRetryCount': 123,
        'IamAuthEnabled': True|False
    },
    RedshiftSettings={
        'AcceptAnyDate': True|False,
        'AfterConnectScript': 'string',
        'BucketFolder': 'string',
        'BucketName': 'string',
        'ConnectionTimeout': 123,
        'DatabaseName': 'string',
        'DateFormat': 'string',
        'EmptyAsNull': True|False,
        'EncryptionMode': 'sse-s3'|'sse-kms',
        'FileTransferUploadStreams': 123,
        'LoadTimeout': 123,
        'MaxFileSize': 123,
        'Password': 'string',
        'Port': 123,
        'RemoveQuotes': True|False,
        'ReplaceInvalidChars': 'string',
        'ReplaceChars': 'string',
        'ServerName': 'string',
        'ServiceAccessRoleArn': 'string',
        'ServerSideEncryptionKmsKeyId': 'string',
        'TimeFormat': 'string',
        'TrimBlanks': True|False,
        'TruncateColumns': True|False,
        'Username': 'string',
        'WriteBufferSize': 123
    },
    PostgreSQLSettings={
        'DatabaseName': 'string',
        'Password': 'string',
        'Port': 123,
        'ServerName': 'string',
        'Username': 'string'
    },
    MySQLSettings={
        'DatabaseName': 'string',
        'Password': 'string',
        'Port': 123,
        'ServerName': 'string',
        'Username': 'string'
    },
    OracleSettings={
        'AsmPassword': 'string',
        'AsmServer': 'string',
        'AsmUser': 'string',
        'DatabaseName': 'string',
        'Password': 'string',
        'Port': 123,
        'SecurityDbEncryption': 'string',
        'SecurityDbEncryptionName': 'string',
        'ServerName': 'string',
        'Username': 'string'
    },
    SybaseSettings={
        'DatabaseName': 'string',
        'Password': 'string',
        'Port': 123,
        'ServerName': 'string',
        'Username': 'string'
    },
    MicrosoftSQLServerSettings={
        'Port': 123,
        'DatabaseName': 'string',
        'Password': 'string',
        'ServerName': 'string',
        'Username': 'string'
    },
    IBMDb2Settings={
        'DatabaseName': 'string',
        'Password': 'string',
        'Port': 123,
        'ServerName': 'string',
        'Username': 'string'
    }
)
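As a rough sketch of how the request above might be assembled, the following builds the parameters for updating a PostgreSQL source endpoint's connection details. Every value here (ARN, host, database, credentials) is a placeholder, not a real resource:

```python
# Hypothetical parameters for modify_endpoint; all values are placeholders.
params = {
    "EndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE",
    "EndpointType": "source",
    "EngineName": "postgres",
    "PostgreSQLSettings": {
        "ServerName": "db.example.com",
        "Port": 5432,
        "DatabaseName": "inventory",
        "Username": "dms_user",
        "Password": "not-a-real-password",
    },
}

# With real credentials configured, you would then call:
# import boto3
# client = boto3.client("dms")
# response = client.modify_endpoint(**params)
```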
type EndpointArn:

string

param EndpointArn:

[REQUIRED]

The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.

type EndpointIdentifier:

string

param EndpointIdentifier:

The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.

type EndpointType:

string

param EndpointType:

The type of endpoint. Valid values are source and target.

type EngineName:

string

param EngineName:

The type of engine for the endpoint. Valid values, depending on the EndpointType, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "documentdb", "sqlserver", and "neptune".

type Username:

string

param Username:

The user name to be used to log in to the endpoint database.

type Password:

string

param Password:

The password to be used to log in to the endpoint database.

type ServerName:

string

param ServerName:

The name of the server where the endpoint database resides.

type Port:

integer

param Port:

The port used by the endpoint database.

type DatabaseName:

string

param DatabaseName:

The name of the endpoint database.

type ExtraConnectionAttributes:

string

param ExtraConnectionAttributes:

Additional attributes associated with the connection. To reset this parameter, pass the empty string ("") as an argument.

type CertificateArn:

string

param CertificateArn:

The Amazon Resource Name (ARN) of the certificate used for SSL connection.

type SslMode:

string

param SslMode:

The SSL mode used to connect to the endpoint. The default value is none.

type ServiceAccessRoleArn:

string

param ServiceAccessRoleArn:

The Amazon Resource Name (ARN) for the service access role you want to use to modify the endpoint.

type ExternalTableDefinition:

string

param ExternalTableDefinition:

The external table definition.

type DynamoDbSettings:

dict

param DynamoDbSettings:

Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see Using Object Mapping to Migrate Data to DynamoDB in the AWS Database Migration Service User Guide.

  • ServiceAccessRoleArn (string) -- [REQUIRED]

    The Amazon Resource Name (ARN) used by the service access IAM role.

type S3Settings:

dict

param S3Settings:

Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see Extra Connection Attributes When Using Amazon S3 as a Target for AWS DMS in the AWS Database Migration Service User Guide.

  • ServiceAccessRoleArn (string) --

    The Amazon Resource Name (ARN) used by the service access IAM role.

  • ExternalTableDefinition (string) --

    The external table definition.

  • CsvRowDelimiter (string) --

    The delimiter used to separate rows in the source files. The default is a newline (\n).

  • CsvDelimiter (string) --

    The delimiter used to separate columns in the source files. The default is a comma.

  • BucketFolder (string) --

    An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.

  • BucketName (string) --

    The name of the S3 bucket.

  • CompressionType (string) --

    An optional parameter. Set this parameter to GZIP to compress the target files, or set it to NONE (the default) or omit it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.

  • EncryptionMode (string) --

    The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:

    • s3:CreateBucket

    • s3:ListBucket

    • s3:DeleteBucket

    • s3:GetBucketLocation

    • s3:GetObject

    • s3:PutObject

    • s3:DeleteObject

    • s3:GetObjectVersion

    • s3:GetBucketPolicy

    • s3:PutBucketPolicy

    • s3:DeleteBucketPolicy

  • ServerSideEncryptionKmsKeyId (string) --

    If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.

    Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value

  • DataFormat (string) --

    The format of the data that you want to use for output. You can choose one of the following:

    • csv : This is a row-based file format with comma-separated values (.csv).

    • parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.

  • EncodingType (string) --

    The type of encoding you are using:

    • RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.

    • PLAIN doesn't use encoding at all. Values are stored as they are.

    • PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.

  • DictPageSizeLimit (integer) --

    The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.

  • RowGroupLength (integer) --

    The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.

    If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).

  • DataPageSize (integer) --

    The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.

  • ParquetVersion (string) --

    The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.

  • EnableStatistics (boolean) --

    A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.

  • IncludeOpForFullLoad (boolean) --

    A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.

    For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.

  • CdcInsertsOnly (boolean) --

    A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.

    If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.

  • TimestampColumnName (string) --

    A value that, when nonblank, causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.

    DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.

    For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.

    For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.

    The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.

    When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.

  • ParquetTimestampInMillisecond (boolean) --

    A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.

    When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.

    Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.

  • CdcInsertsAndUpdates (boolean) --

    A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.

    For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
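
Taken together, these flags control what lands in each output record. The following sketch shows one way the S3 target settings described above might be combined for a Parquet CDC load; the role ARN, bucket name, and timestamp column name are illustrative placeholders, and the resulting dict would be passed as the S3Settings parameter of create_endpoint or modify_endpoint:

```python
import json

# A sketch of S3Settings for a CDC load to .parquet output.
# All names and ARNs below are placeholders, not real resources.
s3_settings = {
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-role",  # placeholder ARN
    "BucketName": "my-dms-bucket",           # placeholder bucket
    "DataFormat": "parquet",
    "CdcInsertsOnly": True,                  # migrate only INSERTs from the source
    "IncludeOpForFullLoad": True,            # first field of each record marks the operation
    "TimestampColumnName": "dms_commit_ts",  # adds a STRING timestamp column (placeholder name)
    "ParquetTimestampInMillisecond": True,   # millisecond precision for Athena / AWS Glue
    "CdcInsertsAndUpdates": False,           # don't combine with CdcInsertsOnly
}

# The structure is plain JSON, matching the documented settings.
print(json.dumps(s3_settings, indent=2))
```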

type DmsTransferSettings:

dict

param DmsTransferSettings:

The settings in JSON format for the DMS transfer type of source endpoint.

Attributes include the following:

  • ServiceAccessRoleArn - The AWS Identity and Access Management (IAM) role that has permission to access the Amazon S3 bucket.

  • BucketName - The name of the S3 bucket to use.

  • CompressionType - An optional parameter to use GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed.

Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string

JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }

  • ServiceAccessRoleArn (string) --

    The IAM role that has permission to access the Amazon S3 bucket.

  • BucketName (string) --

    The name of the S3 bucket to use.
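
As a rough illustration of the JSON syntax above, the same settings can be built as a plain dict and serialized; the role ARN and bucket name below are placeholders:

```python
import json

# DmsTransferSettings as a Python dict; values are illustrative placeholders.
dms_transfer_settings = {
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-transfer-role",
    "BucketName": "my-dms-transfer-bucket",
    "CompressionType": "gzip",  # or "none" (the default) to leave files uncompressed
}

# Serializing yields the documented JSON shape:
# {"ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip"}
payload = json.dumps(dms_transfer_settings)
```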

type MongoDbSettings:

dict

param MongoDbSettings:

Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see the configuration properties section in Using MongoDB as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.

  • Username (string) --

    The user name you use to access the MongoDB source endpoint.

  • Password (string) --

    The password for the user account you use to access the MongoDB source endpoint.

  • ServerName (string) --

    The name of the server on the MongoDB source endpoint.

  • Port (integer) --

    The port value for the MongoDB source endpoint.

  • DatabaseName (string) --

    The database name on the MongoDB source endpoint.

  • AuthType (string) --

    The authentication type you use to access the MongoDB source endpoint.

    When set to "no", the user name and password parameters aren't used and can be empty.

  • AuthMechanism (string) --

    The authentication mechanism you use to access the MongoDB source endpoint.

    The default for MongoDB version 2.x is "mongodb_cr". For MongoDB version 3.x or later, the default is "scram_sha_1". This setting isn't used when AuthType is set to "no".

  • NestingLevel (string) --

    Specifies either document or table mode.

    Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.

  • ExtractDocId (string) --

    Specifies the document ID. Use this setting when NestingLevel is set to "none".

    Default value is "false".

  • DocsToInvestigate (string) --

    Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one".

    Must be a positive value greater than 0. Default value is 1000.

  • AuthSource (string) --

    The MongoDB database name. This setting isn't used when AuthType is set to "no".

    The default is "admin".

  • KmsKeyId (string) --

    The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
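
A sketch of these settings for a table-mode MongoDB source follows; the host name and credentials are placeholders. With AuthType set to "no", the Username and Password fields could be left empty:

```python
# MongoDbSettings for a source endpoint in table mode.
# Host, database, and credentials are illustrative placeholders.
mongodb_settings = {
    "ServerName": "mongodb.example.com",   # placeholder host
    "Port": 27017,                         # default MongoDB port
    "DatabaseName": "appdb",               # placeholder database
    "Username": "dms_user",                # placeholder user
    "Password": "example-password",        # placeholder password
    "AuthType": "password",                # "no" would skip Username/Password
    "AuthMechanism": "scram_sha_1",        # default for MongoDB 3.x or later
    "AuthSource": "admin",                 # default authentication database
    "NestingLevel": "one",                 # "one" = table mode, "none" = document mode
    "DocsToInvestigate": "1000",           # string-typed; only used in table mode
}
```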

type KinesisSettings:

dict

param KinesisSettings:

Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see Using Amazon Kinesis Data Streams as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.

  • StreamArn (string) --

    The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.

  • MessageFormat (string) --

    The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

  • ServiceAccessRoleArn (string) --

    The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.

  • IncludeTransactionDetails (boolean) --

    Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

  • IncludePartitionValue (boolean) --

    Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type. The default is False.

  • PartitionIncludeSchemaTable (boolean) --

    Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False.

  • IncludeTableAlterOperations (boolean) --

    Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

  • IncludeControlDetails (boolean) --

    Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False.
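
The settings above might be combined as in this sketch; both ARNs are placeholders. PartitionIncludeSchemaTable is enabled here to illustrate the shard-distribution option described above:

```python
# KinesisSettings for a target endpoint; both ARNs are placeholders.
kinesis_settings = {
    "StreamArn": "arn:aws:kinesis:us-east-1:123456789012:stream/dms-stream",
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-kinesis-role",
    "MessageFormat": "json",               # or "json-unformatted"
    "PartitionIncludeSchemaTable": True,   # spread hot primary keys across shards
    "IncludeTransactionDetails": False,    # defaults shown explicitly for clarity
    "IncludePartitionValue": False,
    "IncludeTableAlterOperations": False,
    "IncludeControlDetails": False,
}
```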

type KafkaSettings:

dict

param KafkaSettings:

Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see Using Apache Kafka as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.

  • Broker (string) --

    The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

  • Topic (string) --

    The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

  • MessageFormat (string) --

    The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

  • IncludeTransactionDetails (boolean) --

    Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

  • IncludePartitionValue (boolean) --

    Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is False.

  • PartitionIncludeSchemaTable (boolean) --

    Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is False.

  • IncludeTableAlterOperations (boolean) --

    Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

  • IncludeControlDetails (boolean) --

    Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is False.
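
A comparable sketch for a Kafka target follows. The broker address uses the broker-hostname-or-ip:port form from the example above, and the topic name is a placeholder (omitting Topic falls back to "kafka-default-topic"):

```python
# KafkaSettings for a target endpoint; broker and topic are placeholders.
kafka_settings = {
    "Broker": "ec2-12-345-678-901.compute-1.amazonaws.com:2345",  # broker-hostname-or-ip:port
    "Topic": "dms-migration-topic",     # placeholder; default is "kafka-default-topic"
    "MessageFormat": "json-unformatted",  # single line with no tab
    "IncludeControlDetails": True,        # emit table/column change details
    "IncludeTableAlterOperations": True,  # include DDL such as rename-table, add-column
}
```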

type ElasticsearchSettings:

dict

param ElasticsearchSettings:

Settings in JSON format for the target Elasticsearch endpoint. For more information about the available settings, see Extra Connection Attributes When Using Elasticsearch as a Target for AWS DMS in the AWS Database Migration Service User Guide.

  • ServiceAccessRoleArn (string) -- [REQUIRED]

    The Amazon Resource Name (ARN) used by the service to access the IAM role.

  • EndpointUri (string) -- [REQUIRED]

    The endpoint for the Elasticsearch cluster.

  • FullLoadErrorPercentage (integer) --

    The maximum percentage of records that can fail to be written before a full load operation stops.

  • ErrorRetryDuration (integer) --

    The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
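
Because ServiceAccessRoleArn and EndpointUri are the only required fields, a minimal sketch (with placeholder values) looks like this:

```python
# ElasticsearchSettings; the ARN and endpoint URI are placeholders.
elasticsearch_settings = {
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-es-role",       # required
    "EndpointUri": "https://search-mydomain.us-east-1.es.amazonaws.com",        # required
    "FullLoadErrorPercentage": 10,  # stop the full load if more than 10% of records fail
    "ErrorRetryDuration": 300,      # retry failed API requests for up to 300 seconds
}

# Both required keys must be present before the endpoint is created.
required = {"ServiceAccessRoleArn", "EndpointUri"}
assert required <= elasticsearch_settings.keys()
```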

type NeptuneSettings:

dict

param NeptuneSettings:

Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see Specifying Endpoint Settings for Amazon Neptune as a Target in the AWS Database Migration Service User Guide.

  • ServiceAccessRoleArn (string) --

    The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.

  • S3BucketName (string) -- [REQUIRED]

    The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.

  • S3BucketFolder (string) -- [REQUIRED]

    A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.

  • ErrorRetryDuration (integer) --

    The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.

  • MaxFileSize (integer) --

    The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.

  • MaxRetryCount (integer) --

    The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.

  • IamAuthEnabled (boolean) --

    If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true. Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn. The default is false.
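
In the following sketch, S3BucketName and S3BucketFolder are the required fields; the other values show the documented defaults. All names and ARNs are placeholders:

```python
# NeptuneSettings for a target endpoint; names and ARNs are placeholders.
neptune_settings = {
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-neptune-role",
    "S3BucketName": "my-dms-staging-bucket",  # required; temporary .csv staging area
    "S3BucketFolder": "neptune-staging/",     # required; folder path inside the bucket
    "ErrorRetryDuration": 250,   # milliseconds between bulk-load retries (default)
    "MaxFileSize": 1048576,      # KB of staged .csv data per bulk load (default)
    "MaxRetryCount": 5,          # bulk-load retries before raising an error (default)
    "IamAuthEnabled": False,     # set True to require IAM auth on the Neptune endpoint
}
```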

type RedshiftSettings:

dict

param RedshiftSettings:

Provides information that defines an Amazon Redshift endpoint.

  • AcceptAnyDate (boolean) --

    A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).

    This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.

  • AfterConnectScript (string) --

    Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.

  • BucketFolder (string) --

    The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.

  • BucketName (string) --

    The name of the S3 bucket that you want to use.

  • ConnectionTimeout (integer) --

    A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.

  • DatabaseName (string) --

    The name of the Amazon Redshift data warehouse (service) that you are working with.

  • DateFormat (string) --

    The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.

    If your date and time values use formats different from each other, set this to auto.

  • EmptyAsNull (boolean) --

    A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.

  • EncryptionMode (string) --

    The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"

  • FileTransferUploadStreams (integer) --

    The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.

  • LoadTimeout (integer) --

    The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.

  • MaxFileSize (integer) --

    The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).

  • Password (string) --

    The password for the user named in the username property.

  • Port (integer) --

    The port number for Amazon Redshift. The default value is 5439.

  • RemoveQuotes (boolean) --

    A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.

  • ReplaceInvalidChars (string) --

    A list of characters that you want to replace. Use with ReplaceChars.

  • ReplaceChars (string) --

    A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars. The default is "?".

  • ServerName (string) --

    The name of the Amazon Redshift cluster you are using.

  • ServiceAccessRoleArn (string) --

    The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.

  • ServerSideEncryptionKmsKeyId (string) --

    The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.

  • TimeFormat (string) --

    The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. It defaults to auto. Using auto recognizes most strings, even some that aren't supported when you use a time format string.

    If your date and time values use formats different from each other, set this parameter to auto.

  • TrimBlanks (boolean) --

    A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.

  • TruncateColumns (boolean) --

    A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.

  • Username (string) --

    An Amazon Redshift user name for a registered user.

  • WriteBufferSize (integer) --

    The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
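
Several of these settings have documented numeric ranges, which a sketch like the following can make explicit; the cluster host, credentials, and role ARN are placeholders:

```python
# RedshiftSettings for a target endpoint; host, credentials, and ARN are placeholders.
redshift_settings = {
    "ServerName": "my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    "Port": 5439,                       # Amazon Redshift default port
    "DatabaseName": "dev",              # placeholder warehouse name
    "Username": "dms_user",             # placeholder user
    "Password": "example-password",     # placeholder password
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-redshift-role",
    "FileTransferUploadStreams": 10,    # threads per file; valid range 1-64, default 10
    "MaxFileSize": 32768,               # KB per .csv file; valid range 1-1,048,576
    "WriteBufferSize": 1024,            # rows; valid range 1-2,048, default 1,024
    "DateFormat": "auto",               # recognize most date strings
    "TimeFormat": "auto",               # recognize most time strings
}

# Sanity-check the documented ranges before sending the settings.
assert 1 <= redshift_settings["FileTransferUploadStreams"] <= 64
assert 1 <= redshift_settings["WriteBufferSize"] <= 2048
```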

type PostgreSQLSettings:

dict

param PostgreSQLSettings:

Settings in JSON format for the source and target PostgreSQL endpoint. For information about other available settings, see Extra connection attributes when using PostgreSQL as a source for AWS DMS and Extra connection attributes when using PostgreSQL as a target for AWS DMS in the AWS Database Migration Service User Guide.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • Port (integer) --

    Endpoint TCP port.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.

type MySQLSettings:

dict

param MySQLSettings:

Settings in JSON format for the source and target MySQL endpoint. For information about other available settings, see Extra connection attributes when using MySQL as a source for AWS DMS and Extra connection attributes when using a MySQL-compatible database as a target for AWS DMS in the AWS Database Migration Service User Guide.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • Port (integer) --

    Endpoint TCP port.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.

type OracleSettings:

dict

param OracleSettings:

Settings in JSON format for the source and target Oracle endpoint. For information about other available settings, see Extra connection attributes when using Oracle as a source for AWS DMS and Extra connection attributes when using Oracle as a target for AWS DMS in the AWS Database Migration Service User Guide.

  • AsmPassword (string) --

    For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

  • AsmServer (string) --

    For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

  • AsmUser (string) --

    For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • Port (integer) --

    Endpoint TCP port.

  • SecurityDbEncryption (string) --

    For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. The SecurityDbEncryption setting is related to this SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

  • SecurityDbEncryptionName (string) --

    For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName, see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.

type SybaseSettings:

dict

param SybaseSettings:

Settings in JSON format for the source and target SAP ASE endpoint. For information about other available settings, see Extra connection attributes when using SAP ASE as a source for AWS DMS and Extra connection attributes when using SAP ASE as a target for AWS DMS in the AWS Database Migration Service User Guide.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • Port (integer) --

    Endpoint TCP port.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.

type MicrosoftSQLServerSettings:

dict

param MicrosoftSQLServerSettings:

Settings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see Extra connection attributes when using SQL Server as a source for AWS DMS and Extra connection attributes when using SQL Server as a target for AWS DMS in the AWS Database Migration Service User Guide.

  • Port (integer) --

    Endpoint TCP port.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.

type IBMDb2Settings:

dict

param IBMDb2Settings:

Settings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see Extra connection attributes when using Db2 LUW as a source for AWS DMS in the AWS Database Migration Service User Guide.

  • DatabaseName (string) --

    Database name for the endpoint.

  • Password (string) --

    Endpoint connection password.

  • Port (integer) --

    Endpoint TCP port.

  • ServerName (string) --

    Fully qualified domain name of the endpoint.

  • Username (string) --

    Endpoint connection user name.
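
The relational endpoint settings structures above (PostgreSQLSettings, MySQLSettings, SybaseSettings, MicrosoftSQLServerSettings, and IBMDb2Settings) share the same five connection fields, so a small helper like this hypothetical one can build any of them; OracleSettings layers its ASM and TDE fields on top of the same base:

```python
def make_relational_settings(server, port, database, user, password):
    """Build the five fields shared by the relational endpoint settings
    structures (PostgreSQL, MySQL, Sybase, SQL Server, IBM Db2).
    This helper is illustrative, not part of the DMS API."""
    return {
        "ServerName": server,       # fully qualified domain name of the endpoint
        "Port": port,               # endpoint TCP port
        "DatabaseName": database,   # database name for the endpoint
        "Username": user,           # endpoint connection user name
        "Password": password,       # endpoint connection password
    }

# Placeholder values; each dict would go in the matching request parameter,
# e.g. PostgreSQLSettings=pg in create_endpoint or modify_endpoint.
pg = make_relational_settings("pg.example.com", 5432, "appdb", "dms_user", "example-password")
mssql = make_relational_settings("sql.example.com", 1433, "appdb", "dms_user", "example-password")
```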

rtype:

dict

returns:

Response Syntax

{
    'Endpoint': {
        'EndpointIdentifier': 'string',
        'EndpointType': 'source'|'target',
        'EngineName': 'string',
        'EngineDisplayName': 'string',
        'Username': 'string',
        'ServerName': 'string',
        'Port': 123,
        'DatabaseName': 'string',
        'ExtraConnectionAttributes': 'string',
        'Status': 'string',
        'KmsKeyId': 'string',
        'EndpointArn': 'string',
        'CertificateArn': 'string',
        'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
        'ServiceAccessRoleArn': 'string',
        'ExternalTableDefinition': 'string',
        'ExternalId': 'string',
        'DynamoDbSettings': {
            'ServiceAccessRoleArn': 'string'
        },
        'S3Settings': {
            'ServiceAccessRoleArn': 'string',
            'ExternalTableDefinition': 'string',
            'CsvRowDelimiter': 'string',
            'CsvDelimiter': 'string',
            'BucketFolder': 'string',
            'BucketName': 'string',
            'CompressionType': 'none'|'gzip',
            'EncryptionMode': 'sse-s3'|'sse-kms',
            'ServerSideEncryptionKmsKeyId': 'string',
            'DataFormat': 'csv'|'parquet',
            'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary',
            'DictPageSizeLimit': 123,
            'RowGroupLength': 123,
            'DataPageSize': 123,
            'ParquetVersion': 'parquet-1-0'|'parquet-2-0',
            'EnableStatistics': True|False,
            'IncludeOpForFullLoad': True|False,
            'CdcInsertsOnly': True|False,
            'TimestampColumnName': 'string',
            'ParquetTimestampInMillisecond': True|False,
            'CdcInsertsAndUpdates': True|False
        },
        'DmsTransferSettings': {
            'ServiceAccessRoleArn': 'string',
            'BucketName': 'string'
        },
        'MongoDbSettings': {
            'Username': 'string',
            'Password': 'string',
            'ServerName': 'string',
            'Port': 123,
            'DatabaseName': 'string',
            'AuthType': 'no'|'password',
            'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1',
            'NestingLevel': 'none'|'one',
            'ExtractDocId': 'string',
            'DocsToInvestigate': 'string',
            'AuthSource': 'string',
            'KmsKeyId': 'string'
        },
        'KinesisSettings': {
            'StreamArn': 'string',
            'MessageFormat': 'json'|'json-unformatted',
            'ServiceAccessRoleArn': 'string',
            'IncludeTransactionDetails': True|False,
            'IncludePartitionValue': True|False,
            'PartitionIncludeSchemaTable': True|False,
            'IncludeTableAlterOperations': True|False,
            'IncludeControlDetails': True|False
        },
        'KafkaSettings': {
            'Broker': 'string',
            'Topic': 'string',
            'MessageFormat': 'json'|'json-unformatted',
            'IncludeTransactionDetails': True|False,
            'IncludePartitionValue': True|False,
            'PartitionIncludeSchemaTable': True|False,
            'IncludeTableAlterOperations': True|False,
            'IncludeControlDetails': True|False
        },
        'ElasticsearchSettings': {
            'ServiceAccessRoleArn': 'string',
            'EndpointUri': 'string',
            'FullLoadErrorPercentage': 123,
            'ErrorRetryDuration': 123
        },
        'NeptuneSettings': {
            'ServiceAccessRoleArn': 'string',
            'S3BucketName': 'string',
            'S3BucketFolder': 'string',
            'ErrorRetryDuration': 123,
            'MaxFileSize': 123,
            'MaxRetryCount': 123,
            'IamAuthEnabled': True|False
        },
        'RedshiftSettings': {
            'AcceptAnyDate': True|False,
            'AfterConnectScript': 'string',
            'BucketFolder': 'string',
            'BucketName': 'string',
            'ConnectionTimeout': 123,
            'DatabaseName': 'string',
            'DateFormat': 'string',
            'EmptyAsNull': True|False,
            'EncryptionMode': 'sse-s3'|'sse-kms',
            'FileTransferUploadStreams': 123,
            'LoadTimeout': 123,
            'MaxFileSize': 123,
            'Password': 'string',
            'Port': 123,
            'RemoveQuotes': True|False,
            'ReplaceInvalidChars': 'string',
            'ReplaceChars': 'string',
            'ServerName': 'string',
            'ServiceAccessRoleArn': 'string',
            'ServerSideEncryptionKmsKeyId': 'string',
            'TimeFormat': 'string',
            'TrimBlanks': True|False,
            'TruncateColumns': True|False,
            'Username': 'string',
            'WriteBufferSize': 123
        },
        'PostgreSQLSettings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        },
        'MySQLSettings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        },
        'OracleSettings': {
            'AsmPassword': 'string',
            'AsmServer': 'string',
            'AsmUser': 'string',
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'SecurityDbEncryption': 'string',
            'SecurityDbEncryptionName': 'string',
            'ServerName': 'string',
            'Username': 'string'
        },
        'SybaseSettings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        },
        'MicrosoftSQLServerSettings': {
            'Port': 123,
            'DatabaseName': 'string',
            'Password': 'string',
            'ServerName': 'string',
            'Username': 'string'
        },
        'IBMDb2Settings': {
            'DatabaseName': 'string',
            'Password': 'string',
            'Port': 123,
            'ServerName': 'string',
            'Username': 'string'
        }
    }
}

Response Structure

  • (dict) --

    • Endpoint (dict) --

      The modified endpoint.

      • EndpointIdentifier (string) --

        The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.

      • EndpointType (string) --

        The type of endpoint. Valid values are source and target.

      • EngineName (string) --

        The database engine name. Valid values, depending on the EndpointType, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "documentdb", "sqlserver", and "neptune".

      • EngineDisplayName (string) --

        The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."

      • Username (string) --

        The user name used to connect to the endpoint.

      • ServerName (string) --

        The name of the server at the endpoint.

      • Port (integer) --

        The port value used to access the endpoint.

      • DatabaseName (string) --

        The name of the database at the endpoint.

      • ExtraConnectionAttributes (string) --

        Additional connection attributes used to connect to the endpoint.

      • Status (string) --

        The status of the endpoint.

      • KmsKeyId (string) --

        An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.

        If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.

        AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.

      • EndpointArn (string) --

        The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.

      • CertificateArn (string) --

        The Amazon Resource Name (ARN) used for SSL connection to the endpoint.

      • SslMode (string) --

        The SSL mode used to connect to the endpoint. The default value is none.

      • ServiceAccessRoleArn (string) --

        The Amazon Resource Name (ARN) used by the service access IAM role.

      • ExternalTableDefinition (string) --

        The external table definition.

      • ExternalId (string) --

        Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint in a cross-account scenario.

      • DynamoDbSettings (dict) --

        The settings for the DynamoDB target endpoint. For more information, see the DynamoDBSettings structure.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) used by the service access IAM role.

      • S3Settings (dict) --

        The settings for the S3 target endpoint. For more information, see the S3Settings structure.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) used by the service access IAM role.

        • ExternalTableDefinition (string) --

          The external table definition.

        • CsvRowDelimiter (string) --

          The delimiter used to separate rows in the source files. The default is a newline (\n).

        • CsvDelimiter (string) --

          The delimiter used to separate columns in the source files. The default is a comma.

        • BucketFolder (string) --

          An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.

        • BucketName (string) --

          The name of the S3 bucket.

        • CompressionType (string) --

          An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.

        • EncryptionMode (string) --

          The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:

          • s3:CreateBucket

          • s3:ListBucket

          • s3:DeleteBucket

          • s3:GetBucketLocation

          • s3:GetObject

          • s3:PutObject

          • s3:DeleteObject

          • s3:GetObjectVersion

          • s3:GetBucketPolicy

          • s3:PutBucketPolicy

          • s3:DeleteBucketPolicy

        • ServerSideEncryptionKmsKeyId (string) --

          If you are using SSE_KMS for the EncryptionMode, provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.

          Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
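
The same SSE_KMS configuration can be sketched as the S3Settings argument to the boto3 modify_endpoint call. All ARNs, the bucket name, and the key ID below are placeholders:

```python
# Hypothetical ARNs, bucket name, and KMS key ID -- substitute your own values.
s3_settings = {
    'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-s3-role',
    'BucketFolder': 'migrated-data',
    'BucketName': 'my-dms-bucket',
    'EncryptionMode': 'SSE_KMS',
    'ServerSideEncryptionKmsKeyId':
        'arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab',
}

# Passed to the API as, for example:
# client.modify_endpoint(EndpointArn=endpoint_arn, S3Settings=s3_settings)
```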

        • DataFormat (string) --

          The format of the data that you want to use for output. You can choose one of the following:

          • csv : This is a row-based file format with comma-separated values (.csv).

          • parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.

        • EncodingType (string) --

          The type of encoding you are using:

          • RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.

          • PLAIN doesn't use encoding at all. Values are stored as they are.

          • PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.

        • DictPageSizeLimit (integer) --

          The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this maximum, the column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB). This size is used for the .parquet file format only.

        • RowGroupLength (integer) --

          The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for the .parquet file format only.

          If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).

        • DataPageSize (integer) --

          The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.

        • ParquetVersion (string) --

          The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0.

        • EnableStatistics (boolean) --

          A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL, DISTINCT, MAX, and MIN values. This parameter defaults to true. This value is used for .parquet file format only.

        • IncludeOpForFullLoad (boolean) --

          A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.

          For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y, the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.

        • CdcInsertsOnly (boolean) --

          A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.

          If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.

        • TimestampColumnName (string) --

          A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.

          DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.

          For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.

          For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.

          The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS. By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.

          When the AddColumnName parameter is set to true, DMS also includes a name for the timestamp column that you set with TimestampColumnName.

        • ParquetTimestampInMillisecond (boolean) --

          A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.

          When ParquetTimestampInMillisecond is set to true or y, AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.

          Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.

        • CdcInsertsAndUpdates (boolean) --

          A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false, but when CdcInsertsAndUpdates is set to true or y, INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.

          For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
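
The interplay of IncludeOpForFullLoad, CdcInsertsOnly, and the load type is easiest to see as a rule table. The helper below is only a sketch that encodes the IncludeOpForFullLoad and CdcInsertsOnly combinations described above; it is not part of the DMS API:

```python
def csv_first_field(op, include_op_for_full_load, cdc_inserts_only=False, cdc=False):
    """Return the annotation ('I', 'U', or 'D') that DMS writes in the first
    .csv field for a migrated row, or None when no first field is written.
    Raises ValueError for a row that CdcInsertsOnly filters out entirely."""
    letter = {'INSERT': 'I', 'UPDATE': 'U', 'DELETE': 'D'}[op]
    if not cdc:
        # Full load: rows can only be inserted; 'I' is annotated only on request.
        return 'I' if include_op_for_full_load else None
    if not cdc_inserts_only:
        # Default CDC behavior: every record carries I, U, or D.
        return letter
    if op != 'INSERT':
        raise ValueError('row is not migrated when CdcInsertsOnly is true')
    # CdcInsertsOnly: the INSERT annotation follows IncludeOpForFullLoad.
    return 'I' if include_op_for_full_load else None
```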

      • DmsTransferSettings (dict) --

        The settings in JSON format for the DMS transfer type of source endpoint.

        Possible settings include the following:

        • ServiceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.

        • BucketName - The name of the S3 bucket to use.

        • CompressionType - An optional parameter to use GZIP to compress the target files. Set this parameter to GZIP to compress the target files. Either set it to NONE (the default) or don't use it to leave the files uncompressed.

        Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string

        JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }

        • ServiceAccessRoleArn (string) --

          The IAM role that has permission to access the Amazon S3 bucket.

        • BucketName (string) --

          The name of the S3 bucket to use.
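
As a sketch, the JSON syntax shown above can be produced from a plain dict; the role ARN and bucket name are placeholders:

```python
import json

# Hypothetical role ARN and bucket name.
dms_transfer_settings = {
    'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-transfer-role',
    'BucketName': 'my-dms-transfer-bucket',
}

# Serialize to the JSON form shown above when a string is required.
settings_json = json.dumps(dms_transfer_settings)
```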

      • MongoDbSettings (dict) --

        The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.

        • Username (string) --

          The user name you use to access the MongoDB source endpoint.

        • Password (string) --

          The password for the user account you use to access the MongoDB source endpoint.

        • ServerName (string) --

          The name of the server on the MongoDB source endpoint.

        • Port (integer) --

          The port value for the MongoDB source endpoint.

        • DatabaseName (string) --

          The database name on the MongoDB source endpoint.

        • AuthType (string) --

          The authentication type you use to access the MongoDB source endpoint.

          When set to "no", user name and password parameters are not used and can be empty.

        • AuthMechanism (string) --

          The authentication mechanism you use to access the MongoDB source endpoint.

          The default for MongoDB version 2.x is "mongodb_cr". For MongoDB version 3.x or later, the default is "scram_sha_1". This setting isn't used when AuthType is set to "no".

        • NestingLevel (string) --

          Specifies either document or table mode.

          Default value is "none". Specify "none" to use document mode. Specify "one" to use table mode.

        • ExtractDocId (string) --

          Specifies the document ID. Use this setting when NestingLevel is set to "none".

          Default value is "false".

        • DocsToInvestigate (string) --

          Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one".

          Must be a positive value greater than 0. Default value is 1000.

        • AuthSource (string) --

          The MongoDB database name. This setting isn't used when AuthType is set to "no".

          The default is "admin".

        • KmsKeyId (string) --

          The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
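
A minimal MongoDbSettings sketch, using placeholder host and database names. With AuthType set to "no", the user name and password can stay empty, as noted above:

```python
mongodb_settings = {
    'ServerName': 'mongo.example.com',  # placeholder host
    'Port': 27017,                      # default MongoDB port
    'DatabaseName': 'appdb',            # placeholder database
    'AuthType': 'no',                   # credentials unused and left empty
    'Username': '',
    'Password': '',
    'NestingLevel': 'one',              # table mode
    'DocsToInvestigate': '1000',        # string value; must be greater than 0
}
```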

      • KinesisSettings (dict) --

        The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.

        • StreamArn (string) --

          The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.

        • MessageFormat (string) --

          The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.

        • IncludeTransactionDetails (boolean) --

          Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

        • IncludePartitionValue (boolean) --

          Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type. The default is False.

        • PartitionIncludeSchemaTable (boolean) --

          Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is False.

        • IncludeTableAlterOperations (boolean) --

          Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

        • IncludeControlDetails (boolean) --

          Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is False.
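
A KinesisSettings sketch with placeholder ARNs. PartitionIncludeSchemaTable is enabled here to illustrate the shard-distribution option described above:

```python
kinesis_settings = {
    'StreamArn': 'arn:aws:kinesis:us-east-1:123456789012:stream/dms-stream',    # placeholder
    'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-kinesis-role',  # placeholder
    'MessageFormat': 'JSON',              # or 'JSON_UNFORMATTED'
    'PartitionIncludeSchemaTable': True,  # spread similar primary keys across shards
}
```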

      • KafkaSettings (dict) --

        The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.

        • Broker (string) --

          The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".

        • Topic (string) --

          The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

        • MessageFormat (string) --

          The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

        • IncludeTransactionDetails (boolean) --

          Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is False.

        • IncludePartitionValue (boolean) --

          Shows the partition value within the Kafka message output, unless the partition type is schema-table-type. The default is False.

        • PartitionIncludeSchemaTable (boolean) --

          Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is False.

        • IncludeTableAlterOperations (boolean) --

          Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is False.

        • IncludeControlDetails (boolean) --

          Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is False.
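
A KafkaSettings sketch; the broker host, port, and topic name are placeholders. The Broker value follows the broker-hostname-or-ip:port form described above:

```python
broker_host = 'ec2-12-345-678-901.compute-1.amazonaws.com'  # placeholder host
broker_port = 2345                                          # placeholder port

kafka_settings = {
    'Broker': f'{broker_host}:{broker_port}',
    'Topic': 'dms-migration-topic',  # omit to fall back to "kafka-default-topic"
    'MessageFormat': 'JSON',
}
```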

      • ElasticsearchSettings (dict) --

        The settings for the Elasticsearch source endpoint. For more information, see the ElasticsearchSettings structure.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) used by the service to access the IAM role.

        • EndpointUri (string) --

          The endpoint for the Elasticsearch cluster.

        • FullLoadErrorPercentage (integer) --

          The maximum percentage of records that can fail to be written before a full load operation stops.

        • ErrorRetryDuration (integer) --

          The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.

      • NeptuneSettings (dict) --

        The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.

        • S3BucketName (string) --

          The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.

        • S3BucketFolder (string) --

          A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.

        • ErrorRetryDuration (integer) --

          The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.

        • MaxFileSize (integer) --

          The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.

        • MaxRetryCount (integer) --

          The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.

        • IamAuthEnabled (boolean) --

          If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true. Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn. The default is false.

      • RedshiftSettings (dict) --

        Settings for the Amazon Redshift endpoint.

        • AcceptAnyDate (boolean) --

          A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).

          This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.

        • AfterConnectScript (string) --

          Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.

        • BucketFolder (string) --

          The location where the comma-separated value (.csv) files are stored before being uploaded to the S3 bucket.

        • BucketName (string) --

          The name of the S3 bucket you want to use.

        • ConnectionTimeout (integer) --

          A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.

        • DatabaseName (string) --

          The name of the Amazon Redshift data warehouse (service) that you are working with.

        • DateFormat (string) --

          The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.

          If your date and time values use formats different from each other, set this to auto.

        • EmptyAsNull (boolean) --

          A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.

        • EncryptionMode (string) --

          The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. To use SSE_S3, create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"

        • FileTransferUploadStreams (integer) --

          The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.

        • LoadTimeout (integer) --

          The amount of time to wait (in milliseconds) before timing out, beginning from when you begin loading.

        • MaxFileSize (integer) --

          The maximum size (in KB) of any .csv file used to transfer data to Amazon Redshift. This accepts a value from 1 through 1,048,576. It defaults to 32,768 KB (32 MB).

        • Password (string) --

          The password for the user named in the username property.

        • Port (integer) --

          The port number for Amazon Redshift. The default value is 5439.

        • RemoveQuotes (boolean) --

          A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.

        • ReplaceInvalidChars (string) --

          A list of characters that you want to replace. Use with ReplaceChars.

        • ReplaceChars (string) --

          A value that specifies the character used to replace the invalid characters specified in ReplaceInvalidChars. The default is "?".

        • ServerName (string) --

          The name of the Amazon Redshift cluster you are using.

        • ServiceAccessRoleArn (string) --

          The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.

        • ServerSideEncryptionKmsKeyId (string) --

          The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.

        • TimeFormat (string) --

          The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. Using auto recognizes most strings, even some that aren't supported when you use a time format string.

          If your date and time values use formats different from each other, set this parameter to auto.

        • TrimBlanks (boolean) --

          A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.

        • TruncateColumns (boolean) --

          A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.

        • Username (string) --

          An Amazon Redshift user name for a registered user.

        • WriteBufferSize (integer) --

          The size of the write buffer to use in rows. Valid values range from 1 through 2,048. The default is 1,024. Use this setting to tune performance.
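
Several RedshiftSettings values have documented ranges. The sketch below uses placeholder connection details and sanity-checks the ranges stated above before the dict would be passed to modify_endpoint:

```python
redshift_settings = {
    'ServerName': 'my-cluster.abc123.us-east-1.redshift.amazonaws.com',  # placeholder
    'Port': 5439,                     # default Redshift port
    'DatabaseName': 'dev',            # placeholder
    'Username': 'dms_user',           # placeholder
    'Password': 'example-password',   # placeholder
    'FileTransferUploadStreams': 10,  # threads per file: 1-64, default 10
    'MaxFileSize': 32768,             # KB per .csv: 1-1,048,576, default 32,768
    'WriteBufferSize': 1024,          # rows: 1-2,048, default 1,024
}

# Sanity-check the documented ranges.
assert 1 <= redshift_settings['FileTransferUploadStreams'] <= 64
assert 1 <= redshift_settings['MaxFileSize'] <= 1048576
assert 1 <= redshift_settings['WriteBufferSize'] <= 2048
```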

      • PostgreSQLSettings (dict) --

        The settings for the PostgreSQL source and target endpoint. For more information, see the PostgreSQLSettings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • MySQLSettings (dict) --

        The settings for the MySQL source and target endpoint. For more information, see the MySQLSettings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • OracleSettings (dict) --

        The settings for the Oracle source and target endpoint. For more information, see the OracleSettings structure.

        • AsmPassword (string) --

          For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

        • AsmServer (string) --

          For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

        • AsmUser (string) --

          For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • SecurityDbEncryption (string) --

          For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. This SecurityDbEncryption setting is related to the SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

        • SecurityDbEncryptionName (string) --

          For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName, see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • SybaseSettings (dict) --

        The settings for the SAP ASE source and target endpoint. For more information, see the SybaseSettings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • MicrosoftSQLServerSettings (dict) --

        The settings for the Microsoft SQL Server source and target endpoint. For more information, see the MicrosoftSQLServerSettings structure.

        • Port (integer) --

          Endpoint TCP port.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.

      • IBMDb2Settings (dict) --

        The settings for the IBM Db2 LUW source endpoint. For more information, see the IBMDb2Settings structure.

        • DatabaseName (string) --

          Database name for the endpoint.

        • Password (string) --

          Endpoint connection password.

        • Port (integer) --

          Endpoint TCP port.

        • ServerName (string) --

          Fully qualified domain name of the endpoint.

        • Username (string) --

          Endpoint connection user name.
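
The PostgreSQL, MySQL, SAP ASE, Microsoft SQL Server, and IBM Db2 structures above all share the same five fields, so a single helper can build any of them. This is only a convenience sketch; all values are caller-supplied placeholders:

```python
def relational_settings(server, port, database, username, password):
    """Build the five-field settings dict shared by PostgreSQLSettings,
    MySQLSettings, SybaseSettings, MicrosoftSQLServerSettings, and
    IBMDb2Settings."""
    return {
        'ServerName': server,  # fully qualified domain name of the endpoint
        'Port': port,          # endpoint TCP port
        'DatabaseName': database,
        'Username': username,
        'Password': password,
    }

# For example, as the MySQLSettings argument to modify_endpoint:
mysql_settings = relational_settings('db.example.com', 3306, 'appdb', 'dms_user', 'pw')
```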