2023/05/30 - AWS Glue - 21 updated api methods
Changes: Added Runtime parameter to allow selection of Ray Runtime.
BatchGetDevEndpoints (updated)
{'DevEndpoints': {'WorkerType': {'Z.2X'}}}
Returns a list of resource metadata for a given list of development endpoint names. After calling the ListDevEndpoints operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.
See also: AWS API Documentation
Request Syntax
client.batch_get_dev_endpoints( DevEndpointNames=[ 'string', ] )
list
[REQUIRED]
The list of DevEndpoint names, which might be the names returned from the ListDevEndpoints operation.
(string) --
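For orientation, a minimal sketch of calling this operation with boto3 and checking which names could not be resolved (the endpoint names and region are placeholders):

import boto3

glue = boto3.client("glue", region_name="us-east-1")  # region is an example

# Endpoint names are placeholders; pass names returned by ListDevEndpoints.
response = glue.batch_get_dev_endpoints(
    DevEndpointNames=["my-dev-endpoint-1", "my-dev-endpoint-2"]
)

# Inspect the metadata returned for each resolved endpoint.
for endpoint in response["DevEndpoints"]:
    print(endpoint["EndpointName"], endpoint["Status"], endpoint.get("WorkerType"))

# Names that could not be found come back separately.
for missing in response["DevEndpointsNotFound"]:
    print("Not found:", missing)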
dict
Response Syntax
{ 'DevEndpoints': [ { 'EndpointName': 'string', 'RoleArn': 'string', 'SecurityGroupIds': [ 'string', ], 'SubnetId': 'string', 'YarnEndpointAddress': 'string', 'PrivateAddress': 'string', 'ZeppelinRemoteSparkInterpreterPort': 123, 'PublicAddress': 'string', 'Status': 'string', 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'GlueVersion': 'string', 'NumberOfWorkers': 123, 'NumberOfNodes': 123, 'AvailabilityZone': 'string', 'VpcId': 'string', 'ExtraPythonLibsS3Path': 'string', 'ExtraJarsS3Path': 'string', 'FailureReason': 'string', 'LastUpdateStatus': 'string', 'CreatedTimestamp': datetime(2015, 1, 1), 'LastModifiedTimestamp': datetime(2015, 1, 1), 'PublicKey': 'string', 'PublicKeys': [ 'string', ], 'SecurityConfiguration': 'string', 'Arguments': { 'string': 'string' } }, ], 'DevEndpointsNotFound': [ 'string', ] }
Response Structure
(dict) --
DevEndpoints (list) --
A list of DevEndpoint definitions.
(dict) --
A development endpoint where a developer can remotely debug extract, transform, and load (ETL) scripts.
EndpointName (string) --
The name of the DevEndpoint .
RoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role used in this DevEndpoint .
SecurityGroupIds (list) --
A list of security group identifiers used in this DevEndpoint .
(string) --
SubnetId (string) --
The subnet ID for this DevEndpoint .
YarnEndpointAddress (string) --
The YARN endpoint address used by this DevEndpoint .
PrivateAddress (string) --
A private IP address to access the DevEndpoint within a VPC if the DevEndpoint is created within one. The PrivateAddress field is present only when you create the DevEndpoint within your VPC.
ZeppelinRemoteSparkInterpreterPort (integer) --
The Apache Zeppelin port for the remote Apache Spark interpreter.
PublicAddress (string) --
The public IP address used by this DevEndpoint . The PublicAddress field is present only when you create a non-virtual private cloud (VPC) DevEndpoint .
Status (string) --
The current status of this DevEndpoint .
WorkerType (string) --
The type of predefined worker that is allocated to the development endpoint. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
Known issue: when a development endpoint is created with the G.2X WorkerType configuration, the Spark drivers for the development endpoint will run on 4 vCPU, 16 GB of memory, and a 64 GB disk.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Development endpoints that are created without specifying a Glue version default to Glue 0.9.
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated to the development endpoint.
The maximum number of workers you can define is 299 for G.1X, and 149 for G.2X.
NumberOfNodes (integer) --
The number of Glue Data Processing Units (DPUs) allocated to this DevEndpoint .
AvailabilityZone (string) --
The Amazon Web Services Availability Zone where this DevEndpoint is located.
VpcId (string) --
The ID of the virtual private cloud (VPC) used by this DevEndpoint .
ExtraPythonLibsS3Path (string) --
The paths to one or more Python libraries in an Amazon S3 bucket that should be loaded in your DevEndpoint . Multiple values must be complete paths separated by a comma.
Note
You can only use pure Python libraries with a DevEndpoint . Libraries that rely on C extensions, such as the pandas Python data analysis library, are not currently supported.
ExtraJarsS3Path (string) --
The path to one or more Java .jar files in an S3 bucket that should be loaded in your DevEndpoint .
Note
You can only use pure Java/Scala libraries with a DevEndpoint .
FailureReason (string) --
The reason for a current failure in this DevEndpoint .
LastUpdateStatus (string) --
The status of the last update.
CreatedTimestamp (datetime) --
The point in time at which this DevEndpoint was created.
LastModifiedTimestamp (datetime) --
The point in time at which this DevEndpoint was last modified.
PublicKey (string) --
The public key to be used by this DevEndpoint for authentication. This attribute is provided for backward compatibility because the recommended attribute to use is PublicKeys.
PublicKeys (list) --
A list of public keys to be used by the DevEndpoints for authentication. Using this attribute is preferred over a single public key because the public keys allow you to have a different private key per client.
Note
If you previously created an endpoint with a public key, you must remove that key to be able to set a list of public keys. Call the UpdateDevEndpoint API operation with the public key content in the deletePublicKeys attribute, and the list of new keys in the addPublicKeys attribute.
(string) --
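As a sketch of the rotation described in the note above (the endpoint name and key material are placeholders), an UpdateDevEndpoint call might look like:

import boto3

glue = boto3.client("glue")

# Placeholder values: substitute your endpoint name and real key material.
old_key = "ssh-rsa AAAA...old"
new_keys = ["ssh-rsa AAAA...client1", "ssh-rsa AAAA...client2"]

# Remove the single legacy public key and register a list of new keys in one call.
glue.update_dev_endpoint(
    EndpointName="my-dev-endpoint",
    DeletePublicKeys=[old_key],
    AddPublicKeys=new_keys,
)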
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this DevEndpoint .
Arguments (dict) --
A map of arguments used to configure the DevEndpoint .
Valid arguments are:
"--enable-glue-datacatalog": ""
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
(string) --
(string) --
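As an illustration of the Arguments map, a sketch of updating a development endpoint's arguments follows. The "--enable-glue-datacatalog" key is the one documented above; the "GLUE_PYTHON_VERSION" key is an assumption based on older Glue documentation for selecting the Python version, so verify it against the current guide.

import boto3

glue = boto3.client("glue")

# Sketch only: "--enable-glue-datacatalog" is documented above;
# "GLUE_PYTHON_VERSION" is an assumed key for selecting Python 3.
glue.update_dev_endpoint(
    EndpointName="my-dev-endpoint",   # placeholder name
    AddArguments={
        "--enable-glue-datacatalog": "",
        "GLUE_PYTHON_VERSION": "3",
    },
)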
DevEndpointsNotFound (list) --
A list of DevEndpoints not found.
(string) --
BatchGetJobs (updated)
{'Jobs': {'Command': {'Runtime': 'string'}, 'WorkerType': {'Z.2X'}}}
Returns a list of resource metadata for a given list of job names. After calling the ListJobs operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.
See also: AWS API Documentation
Request Syntax
client.batch_get_jobs( JobNames=[ 'string', ] )
list
[REQUIRED]
A list of job names, which might be the names returned from the ListJobs operation.
(string) --
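A minimal sketch of retrieving job metadata and inspecting the new Runtime field on the job command (job names are placeholders; Runtime is only populated for Ray jobs):

import boto3

glue = boto3.client("glue")

# Job names are placeholders; use names returned by ListJobs.
response = glue.batch_get_jobs(JobNames=["spark-etl-job", "ray-analytics-job"])

for job in response["Jobs"]:
    command = job.get("Command", {})
    # Runtime is populated for Ray jobs; Spark jobs typically omit it.
    print(job["Name"], command.get("Name"), command.get("Runtime"), job.get("WorkerType"))

# Names that could not be found come back separately.
for missing in response["JobsNotFound"]:
    print("Not found:", missing)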
dict
Response Syntax
{ 'Jobs': [ { 'Name': 'string', 'Description': 'string', 'LogUri': 'string', 'Role': 'string', 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'ExecutionProperty': { 'MaxConcurrentRuns': 123 }, 'Command': { 'Name': 'string', 'ScriptLocation': 'string', 'PythonVersion': 'string', 'Runtime': 'string' }, 'DefaultArguments': { 'string': 'string' }, 'NonOverridableArguments': { 'string': 'string' }, 'Connections': { 'Connections': [ 'string', ] }, 'MaxRetries': 123, 'AllocatedCapacity': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'CodeGenConfigurationNodes': { 'string': { 'AthenaConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'ConnectionTable': 'string', 'SchemaName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'JDBCConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'FilterPredicate': 'string', 'PartitionColumn': 'string', 'LowerBound': 123, 'UpperBound': 123, 'NumPartitions': 123, 'JobBookmarkKeys': [ 'string', ], 'JobBookmarkKeysSortOrder': 'string', 'DataTypeMapping': { 'string': 'DATE'|'STRING'|'TIMESTAMP'|'INT'|'FLOAT'|'LONG'|'BIGDECIMAL'|'BYTE'|'SHORT'|'DOUBLE' } }, 'ConnectionTable': 'string', 'Query': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'RedshiftSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string' }, 'S3CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'S3CsvSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'Separator': 'comma'|'ctrla'|'pipe'|'semicolon'|'tab', 'Escaper': 'string', 'QuoteChar': 'quote'|'quillemet'|'single_quote'|'disabled', 'Multiline': True|False, 'WithHeader': True|False, 'WriteHeader': True|False, 'SkipFirst': True|False, 'OptimizePerformance': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3JsonSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'JsonPath': 'string', 'Multiline': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3ParquetSource': { 'Name': 'string', 
'Paths': [ 'string', ], 'CompressionType': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'RelationalCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'DynamoDBCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'JDBCConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectionTable': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'RedshiftTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string', 'UpsertRedshiftOptions': { 'TableLocation': 'string', 'ConnectionName': 'string', 'UpsertKeys': [ 'string', ] } }, 'S3CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3GlueParquetTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'S3DirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'string', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'ApplyMapping': { 'Name': 'string', 'Inputs': [ 'string', ], 'Mapping': [ { 'ToKey': 'string', 'FromPath': [ 'string', ], 'FromType': 'string', 'ToType': 'string', 'Dropped': True|False, 'Children': {'... 
recursive ...'} }, ] }, 'SelectFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'DropFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'RenameField': { 'Name': 'string', 'Inputs': [ 'string', ], 'SourcePath': [ 'string', ], 'TargetPath': [ 'string', ] }, 'Spigot': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Topk': 123, 'Prob': 123.0 }, 'Join': { 'Name': 'string', 'Inputs': [ 'string', ], 'JoinType': 'equijoin'|'left'|'right'|'outer'|'leftsemi'|'leftanti', 'Columns': [ { 'From': 'string', 'Keys': [ [ 'string', ], ] }, ] }, 'SplitFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'SelectFromCollection': { 'Name': 'string', 'Inputs': [ 'string', ], 'Index': 123 }, 'FillMissingValues': { 'Name': 'string', 'Inputs': [ 'string', ], 'ImputedPath': 'string', 'FilledPath': 'string' }, 'Filter': { 'Name': 'string', 'Inputs': [ 'string', ], 'LogicalOperator': 'AND'|'OR', 'Filters': [ { 'Operation': 'EQ'|'LT'|'GT'|'LTE'|'GTE'|'REGEX'|'ISNULL', 'Negated': True|False, 'Values': [ { 'Type': 'COLUMNEXTRACTED'|'CONSTANT', 'Value': [ 'string', ] }, ] }, ] }, 'CustomCode': { 'Name': 'string', 'Inputs': [ 'string', ], 'Code': 'string', 'ClassName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkSQL': { 'Name': 'string', 'Inputs': [ 'string', ], 'SqlQuery': 'string', 'SqlAliases': [ { 'From': 'string', 'Alias': 'string' }, ], 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'DirectKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string', 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DirectKafkaSource': { 'Name': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123, 'IncludeHeaders': True|False, 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'WindowSize': 123, 'DetectSchema': True|False, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 
'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string', 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKafkaSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123, 'IncludeHeaders': True|False, 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DropNullFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'NullCheckBoxList': { 'IsEmpty': True|False, 'IsNullString': True|False, 'IsNegOne': True|False }, 'NullTextList': [ { 'Value': 'string', 'Datatype': { 'Id': 'string', 'Label': 'string' } }, ] }, 'Merge': { 'Name': 'string', 'Inputs': [ 'string', ], 'Source': 'string', 'PrimaryKeys': [ [ 'string', ], ] }, 'Union': { 'Name': 'string', 'Inputs': [ 'string', ], 'UnionType': 'ALL'|'DISTINCT' }, 'PIIDetection': { 'Name': 'string', 'Inputs': [ 'string', ], 'PiiType': 'RowAudit'|'RowMasking'|'ColumnAudit'|'ColumnMasking', 'EntityTypesToDetect': [ 'string', ], 'OutputColumnName': 'string', 'SampleFraction': 123.0, 'ThresholdFraction': 123.0, 'MaskValue': 'string' }, 'Aggregate': { 'Name': 'string', 'Inputs': [ 'string', ], 'Groups': [ [ 'string', ], ], 'Aggs': [ { 'Column': [ 'string', ], 'AggFunc': 'avg'|'countDistinct'|'count'|'first'|'last'|'kurtosis'|'max'|'min'|'skewness'|'stddev_samp'|'stddev_pop'|'sum'|'sumDistinct'|'var_samp'|'var_pop' }, ] }, 'DropDuplicates': { 'Name': 'string', 'Inputs': [ 'string', ], 'Columns': [ [ 'string', ], ] }, 'GovernedCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'GovernedCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'MicrosoftSQLServerCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MicrosoftSQLServerCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'DynamicTransform': { 'Name': 'string', 'TransformName': 'string', 'Inputs': [ 'string', ], 'Parameters': [ { 'Name': 'string', 'Type': 
'str'|'int'|'float'|'complex'|'bool'|'list'|'null', 'ValidationRule': 'string', 'ValidationMessage': 'string', 'Value': [ 'string', ], 'ListType': 'str'|'int'|'float'|'complex'|'bool'|'list'|'null', 'IsOptional': True|False }, ], 'FunctionName': 'string', 'Path': 'string', 'Version': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'EvaluateDataQuality': { 'Name': 'string', 'Inputs': [ 'string', ], 'Ruleset': 'string', 'Output': 'PrimaryInput'|'EvaluationResults', 'PublishingOptions': { 'EvaluationContext': 'string', 'ResultsS3Prefix': 'string', 'CloudWatchMetricsEnabled': True|False, 'ResultsPublishingEnabled': True|False }, 'StopJobOnFailureOptions': { 'StopJobOnFailureTiming': 'Immediate'|'AfterDataLoad' } }, 'S3CatalogHudiSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalHudiOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogHudiSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalHudiOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3HudiSource': { 'Name': 'string', 'Paths': [ 'string', ], 'AdditionalHudiOptions': { 'string': 'string' }, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3HudiCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3HudiDirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Compression': 'gzip'|'lzo'|'uncompressed'|'snappy', 'PartitionKeys': [ [ 'string', ], ], 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'DirectJDBCSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'ConnectionName': 'string', 'ConnectionType': 'sqlserver'|'mysql'|'oracle'|'postgresql'|'redshift', 'RedshiftTmpDir': 'string' }, 'S3CatalogDeltaSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalDeltaOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogDeltaSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalDeltaOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3DeltaSource': { 'Name': 'string', 'Paths': [ 'string', ], 'AdditionalDeltaOptions': { 'string': 'string' }, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3DeltaCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3DeltaDirectTarget': { 'Name': 'string', 
'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'uncompressed'|'snappy', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'AmazonRedshiftSource': { 'Name': 'string', 'Data': { 'AccessType': 'string', 'SourceType': 'string', 'Connection': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Schema': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Table': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogDatabase': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogTable': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogRedshiftSchema': 'string', 'CatalogRedshiftTable': 'string', 'TempDir': 'string', 'IamRole': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'AdvancedOptions': [ { 'Key': 'string', 'Value': 'string' }, ], 'SampleQuery': 'string', 'PreAction': 'string', 'PostAction': 'string', 'Action': 'string', 'TablePrefix': 'string', 'Upsert': True|False, 'MergeAction': 'string', 'MergeWhenMatched': 'string', 'MergeWhenNotMatched': 'string', 'MergeClause': 'string', 'CrawlerConnection': 'string', 'TableSchema': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ], 'StagingTable': 'string', 'SelectedColumns': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ] } }, 'AmazonRedshiftTarget': { 'Name': 'string', 'Data': { 'AccessType': 'string', 'SourceType': 'string', 'Connection': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Schema': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Table': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogDatabase': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogTable': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogRedshiftSchema': 'string', 'CatalogRedshiftTable': 'string', 'TempDir': 'string', 'IamRole': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'AdvancedOptions': [ { 'Key': 'string', 'Value': 'string' }, ], 'SampleQuery': 'string', 'PreAction': 'string', 'PostAction': 'string', 'Action': 'string', 'TablePrefix': 'string', 'Upsert': True|False, 'MergeAction': 'string', 'MergeWhenMatched': 'string', 'MergeWhenNotMatched': 'string', 'MergeClause': 'string', 'CrawlerConnection': 'string', 'TableSchema': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ], 'StagingTable': 'string', 'SelectedColumns': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ] }, 'Inputs': [ 'string', ] }, 'EvaluateDataQualityMultiFrame': { 'Name': 'string', 'Inputs': [ 'string', ], 'AdditionalDataSources': { 'string': 'string' }, 'Ruleset': 'string', 'PublishingOptions': { 'EvaluationContext': 'string', 'ResultsS3Prefix': 'string', 'CloudWatchMetricsEnabled': True|False, 'ResultsPublishingEnabled': True|False }, 'AdditionalOptions': { 'string': 'string' }, 'StopJobOnFailureOptions': { 'StopJobOnFailureTiming': 'Immediate'|'AfterDataLoad' } } } }, 'ExecutionClass': 'FLEX'|'STANDARD', 'SourceControlDetails': { 'Provider': 'GITHUB'|'AWS_CODE_COMMIT', 'Repository': 'string', 'Owner': 'string', 'Branch': 'string', 'Folder': 'string', 'LastCommitId': 'string', 'AuthStrategy': 
'PERSONAL_ACCESS_TOKEN'|'AWS_SECRETS_MANAGER', 'AuthToken': 'string' } }, ], 'JobsNotFound': [ 'string', ] }
Response Structure
This section is too large to render. Please see the AWS API Documentation: https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/BatchGetJobs
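To show where the new Runtime parameter and the Z.2X worker type fit when defining a job, here is a hedged sketch of creating a Ray job. The role ARN and script location are placeholders, and the specific Runtime value ("Ray2.4") is an assumed example that should be checked against the current Glue documentation.

import boto3

glue = boto3.client("glue")

# Sketch only: role ARN and script location are placeholders, and
# "Ray2.4" is an assumed example of a valid Ray runtime value.
glue.create_job(
    Name="example-ray-job",
    Role="arn:aws:iam::123456789012:role/GlueJobRole",
    GlueVersion="4.0",                 # Ray jobs require Glue 4.0 or later
    WorkerType="Z.2X",                 # Ray worker type added in this update
    NumberOfWorkers=5,
    Command={
        "Name": "glueray",             # Ray job command name
        "PythonVersion": "3.9",
        "Runtime": "Ray2.4",           # the new Runtime parameter
        "ScriptLocation": "s3://example-bucket/scripts/ray_job.py",
    },
)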
BatchGetWorkflows (updated)
{'Workflows': {'Graph': {'Nodes': {'JobDetails': {'JobRuns': {'WorkerType': {'Z.2X'}}}}}, 'LastRun': {'Graph': {'Nodes': {'JobDetails': {'JobRuns': {'WorkerType': {'Z.2X'}}}}}}}}
Returns a list of resource metadata for a given list of workflow names. After calling the ListWorkflows operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.
See also: AWS API Documentation
Request Syntax
client.batch_get_workflows( Names=[ 'string', ], IncludeGraph=True|False )
list
[REQUIRED]
A list of workflow names, which may be the names returned from the ListWorkflows operation.
(string) --
boolean
Specifies whether to include a graph when returning the workflow resource metadata.
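A minimal sketch of fetching workflow metadata with the graph included and pulling the worker types of job runs out of the last run (workflow names are placeholders):

import boto3

glue = boto3.client("glue")

# Workflow names are placeholders; use names returned by ListWorkflows.
response = glue.batch_get_workflows(
    Names=["nightly-etl-workflow"],
    IncludeGraph=True,
)

for workflow in response["Workflows"]:
    last_run = workflow.get("LastRun", {})
    # Walk the graph nodes of the last run and report job-run worker types.
    for node in last_run.get("Graph", {}).get("Nodes", []):
        for run in node.get("JobDetails", {}).get("JobRuns", []):
            print(workflow["Name"], run["JobName"], run.get("WorkerType"))

# Names that could not be found come back separately.
for missing in response["MissingWorkflows"]:
    print("Not found:", missing)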
dict
Response Syntax
{ 'Workflows': [ { 'Name': 'string', 'Description': 'string', 'DefaultRunProperties': { 'string': 'string' }, 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'LastRun': { 'Name': 'string', 'WorkflowRunId': 'string', 'PreviousRunId': 'string', 'WorkflowRunProperties': { 'string': 'string' }, 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'Status': 'RUNNING'|'COMPLETED'|'STOPPING'|'STOPPED'|'ERROR', 'ErrorMessage': 'string', 'Statistics': { 'TotalActions': 123, 'TimeoutActions': 123, 'FailedActions': 123, 'StoppedActions': 123, 'SucceededActions': 123, 'RunningActions': 123, 'ErroredActions': 123, 'WaitingActions': 123 }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'StartingEventBatchCondition': { 'BatchSize': 123, 'BatchWindow': 123 } }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 
'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'MaxConcurrentRuns': 123, 'BlueprintDetails': { 'BlueprintName': 'string', 'RunId': 'string' } }, ], 'MissingWorkflows': [ 'string', ] }
Response Structure
(dict) --
Workflows (list) --
A list of workflow resource metadata.
(dict) --
A workflow is a collection of multiple dependent Glue jobs and crawlers that are run to complete a complex ETL task. A workflow manages the execution and monitoring of all its jobs and crawlers.
Name (string) --
The name of the workflow.
Description (string) --
A description of the workflow.
DefaultRunProperties (dict) --
A collection of properties to be used as part of each execution of the workflow. The run properties are made available to each job in the workflow. A job can modify the properties for the next jobs in the flow.
(string) --
(string) --
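Because a job can modify the run properties for downstream jobs in the flow, a job script might update them with calls along these lines. This is a sketch: the workflow name and run ID are placeholders (inside a Glue job they are typically taken from the WORKFLOW_NAME and WORKFLOW_RUN_ID job arguments), and the property name is illustrative.

import boto3

glue = boto3.client("glue")

# Placeholders: in a Glue job these usually come from the
# WORKFLOW_NAME and WORKFLOW_RUN_ID job arguments.
workflow_name = "nightly-etl-workflow"
workflow_run_id = "wr_0123456789abcdef"

# Read the current run properties, then publish a value for downstream jobs.
current = glue.get_workflow_run_properties(Name=workflow_name, RunId=workflow_run_id)
properties = current["RunProperties"]
properties["last_processed_partition"] = "2023-05-30"

glue.put_workflow_run_properties(
    Name=workflow_name,
    RunId=workflow_run_id,
    RunProperties=properties,
)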
CreatedOn (datetime) --
The date and time when the workflow was created.
LastModifiedOn (datetime) --
The date and time when the workflow was last modified.
LastRun (dict) --
The information about the last execution of the workflow.
Name (string) --
Name of the workflow that was run.
WorkflowRunId (string) --
The ID of this workflow run.
PreviousRunId (string) --
The ID of the previous workflow run.
WorkflowRunProperties (dict) --
The workflow run properties which were set during the run.
(string) --
(string) --
StartedOn (datetime) --
The date and time when the workflow run was started.
CompletedOn (datetime) --
The date and time when the workflow run completed.
Status (string) --
The status of the workflow run.
ErrorMessage (string) --
This error message describes any error that may have occurred in starting the workflow run. Currently the only error message is "Concurrent runs exceeded for workflow: foo ."
Statistics (dict) --
The statistics of the run.
TotalActions (integer) --
Total number of Actions in the workflow run.
TimeoutActions (integer) --
Total number of Actions that timed out.
FailedActions (integer) --
Total number of Actions that have failed.
StoppedActions (integer) --
Total number of Actions that have stopped.
SucceededActions (integer) --
Total number of Actions that have succeeded.
RunningActions (integer) --
Total number of Actions in running state.
ErroredActions (integer) --
Indicates the count of job runs in the ERROR state in the workflow run.
WaitingActions (integer) --
Indicates the count of job runs in WAITING state in the workflow run.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
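As a sketch of how such a batch condition is configured on an event trigger (trigger, workflow, and job names are placeholders):

import boto3

glue = boto3.client("glue")

# Placeholders throughout: trigger, workflow, and job names are illustrative.
glue.create_trigger(
    Name="example-event-trigger",
    WorkflowName="nightly-etl-workflow",
    Type="EVENT",
    Actions=[{"JobName": "ingest-job"}],
    # Fire after 10 events arrive, or after a 900-second window, whichever comes first.
    EventBatchingCondition={"BatchSize": 10, "BatchWindow": 900},
)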
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity . Instead, you should specify a Worker type and the Number of workers .
Do not set MaxCapacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPU, 64 GB of memory, 128 GB disk), and provides up to 8 Ray workers (one per vCPU) based on the autoscaler.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray, Python and additional libraries available in your Ray job are determined by the Runtime parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique ID of the node within the workflow where the edge starts.
DestinationId (string) --
The unique ID of the node within the workflow where the edge ends.
StartingEventBatchCondition (dict) --
The batch condition that started the workflow run.
BatchSize (integer) --
Number of events in the batch.
BatchWindow (integer) --
Duration of the batch window in seconds.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity . Instead, you should specify a Worker type and the Number of workers .
Do not set MaxCapacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPU, 64 GB of memory, 128 GB disk), and provides up to 8 Ray workers (one per vCPU) based on the autoscaler.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray, Python and additional libraries available in your Ray job are determined by the Runtime parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique ID of the node within the workflow where the edge starts.
DestinationId (string) --
The unique ID of the node within the workflow where the edge ends.
MaxConcurrentRuns (integer) --
You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.
BlueprintDetails (dict) --
This structure indicates the details of the blueprint that this particular workflow is created from.
BlueprintName (string) --
The name of the blueprint.
RunId (string) --
The run ID for this blueprint.
MissingWorkflows (list) --
A list of names of workflows not found.
(string) --
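As a hedged illustration of the graph structure described above (nodes keyed by a unique ID, edges connecting SourceId to DestinationId), the following sketch retrieves workflows via BatchGetWorkflows (the operation whose response the MissingWorkflows field suggests) and prints the directed connections. The workflow name is a placeholder, and the field access assumes the response shape shown above.
import boto3

glue = boto3.client('glue')

# Retrieve workflow metadata, including the graph of nodes and edges.
# 'my-workflow' is an illustrative name.
response = glue.batch_get_workflows(Names=['my-workflow'], IncludeGraph=True)
for workflow in response.get('Workflows', []):
    graph = workflow.get('Graph', {})
    # Map each node's unique ID to its name (assumes nodes expose UniqueId/Name).
    nodes = {n.get('UniqueId'): n.get('Name') for n in graph.get('Nodes', [])}
    for edge in graph.get('Edges', []):
        print(nodes.get(edge['SourceId']), '->', nodes.get(edge['DestinationId']))

print('Not found:', response.get('MissingWorkflows', []))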
{'WorkerType': {'Z.2X'}}
Creates a new development endpoint.
See also: AWS API Documentation
Request Syntax
client.create_dev_endpoint( EndpointName='string', RoleArn='string', SecurityGroupIds=[ 'string', ], SubnetId='string', PublicKey='string', PublicKeys=[ 'string', ], NumberOfNodes=123, WorkerType='Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', GlueVersion='string', NumberOfWorkers=123, ExtraPythonLibsS3Path='string', ExtraJarsS3Path='string', SecurityConfiguration='string', Tags={ 'string': 'string' }, Arguments={ 'string': 'string' } )
string
[REQUIRED]
The name to be assigned to the new DevEndpoint .
string
[REQUIRED]
The IAM role for the DevEndpoint .
list
Security group IDs for the security groups to be used by the new DevEndpoint .
(string) --
string
The subnet ID for the new DevEndpoint to use.
string
The public key to be used by this DevEndpoint for authentication. This attribute is provided for backward compatibility because the recommended attribute to use is public keys.
list
A list of public keys to be used by the development endpoints for authentication. The use of this attribute is preferred over a single public key because the public keys allow you to have a different private key per client.
Note
If you previously created an endpoint with a public key, you must remove that key to be able to set a list of public keys. Call the UpdateDevEndpoint API with the public key content in the deletePublicKeys attribute, and the list of new keys in the addPublicKeys attribute.
(string) --
integer
The number of Glue Data Processing Units (DPUs) to allocate to this DevEndpoint .
string
The type of predefined worker that is allocated to the development endpoint. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
Known issue: when a development endpoint is created with the G.2X WorkerType configuration, the Spark drivers for the development endpoint will run on 4 vCPU, 16 GB of memory, and a 64 GB disk.
string
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Development endpoints that are created without specifying a Glue version default to Glue 0.9.
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
integer
The number of workers of a defined workerType that are allocated to the development endpoint.
The maximum number of workers you can define is 299 for G.1X , and 149 for G.2X .
string
The paths to one or more Python libraries in an Amazon S3 bucket that should be loaded in your DevEndpoint . Multiple values must be complete paths separated by a comma.
Note
You can only use pure Python libraries with a DevEndpoint . Libraries that rely on C extensions, such as the pandas Python data analysis library, are not yet supported.
string
The path to one or more Java .jar files in an S3 bucket that should be loaded in your DevEndpoint .
string
The name of the SecurityConfiguration structure to be used with this DevEndpoint .
dict
The tags to use with this DevEndpoint. You may use tags to limit access to the DevEndpoint. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
(string) --
(string) --
dict
A map of arguments used to configure the DevEndpoint .
(string) --
(string) --
dict
Response Syntax
{ 'EndpointName': 'string', 'Status': 'string', 'SecurityGroupIds': [ 'string', ], 'SubnetId': 'string', 'RoleArn': 'string', 'YarnEndpointAddress': 'string', 'ZeppelinRemoteSparkInterpreterPort': 123, 'NumberOfNodes': 123, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'GlueVersion': 'string', 'NumberOfWorkers': 123, 'AvailabilityZone': 'string', 'VpcId': 'string', 'ExtraPythonLibsS3Path': 'string', 'ExtraJarsS3Path': 'string', 'FailureReason': 'string', 'SecurityConfiguration': 'string', 'CreatedTimestamp': datetime(2015, 1, 1), 'Arguments': { 'string': 'string' } }
Response Structure
(dict) --
EndpointName (string) --
The name assigned to the new DevEndpoint .
Status (string) --
The current status of the new DevEndpoint .
SecurityGroupIds (list) --
The security groups assigned to the new DevEndpoint .
(string) --
SubnetId (string) --
The subnet ID assigned to the new DevEndpoint .
RoleArn (string) --
The Amazon Resource Name (ARN) of the role assigned to the new DevEndpoint .
YarnEndpointAddress (string) --
The address of the YARN endpoint used by this DevEndpoint .
ZeppelinRemoteSparkInterpreterPort (integer) --
The Apache Zeppelin port for the remote Apache Spark interpreter.
NumberOfNodes (integer) --
The number of Glue Data Processing Units (DPUs) allocated to this DevEndpoint.
WorkerType (string) --
The type of predefined worker that is allocated to the development endpoint. May be a value of Standard, G.1X, or G.2X.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated to the development endpoint.
AvailabilityZone (string) --
The Amazon Web Services Availability Zone where this DevEndpoint is located.
VpcId (string) --
The ID of the virtual private cloud (VPC) used by this DevEndpoint .
ExtraPythonLibsS3Path (string) --
The paths to one or more Python libraries in an S3 bucket that will be loaded in your DevEndpoint .
ExtraJarsS3Path (string) --
Path to one or more Java .jar files in an S3 bucket that will be loaded in your DevEndpoint .
FailureReason (string) --
The reason for a current failure in this DevEndpoint .
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure being used with this DevEndpoint .
CreatedTimestamp (datetime) --
The point in time at which this DevEndpoint was created.
Arguments (dict) --
The map of arguments used to configure this DevEndpoint .
Valid arguments are:
"--enable-glue-datacatalog": ""
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
(string) --
(string) --
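A minimal sketch of calling this operation with boto3, following the request syntax above. The endpoint name, role ARN, and Glue version are illustrative placeholders; only the arguments needed for a small Spark development endpoint are shown.
import boto3

glue = boto3.client('glue')

# Create a small development endpoint with the G.1X worker type.
# The endpoint name and role ARN below are placeholders.
response = glue.create_dev_endpoint(
    EndpointName='example-dev-endpoint',
    RoleArn='arn:aws:iam::123456789012:role/ExampleGlueDevEndpointRole',
    GlueVersion='1.0',
    WorkerType='G.1X',
    NumberOfWorkers=5,
    Arguments={'--enable-glue-datacatalog': ''},
)
print(response['EndpointName'], response['Status'])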
{'Command': {'Runtime': 'string'}, 'WorkerType': {'Z.2X'}}
Creates a new job definition.
See also: AWS API Documentation
Request Syntax
client.create_job( Name='string', Description='string', LogUri='string', Role='string', ExecutionProperty={ 'MaxConcurrentRuns': 123 }, Command={ 'Name': 'string', 'ScriptLocation': 'string', 'PythonVersion': 'string', 'Runtime': 'string' }, DefaultArguments={ 'string': 'string' }, NonOverridableArguments={ 'string': 'string' }, Connections={ 'Connections': [ 'string', ] }, MaxRetries=123, AllocatedCapacity=123, Timeout=123, MaxCapacity=123.0, SecurityConfiguration='string', Tags={ 'string': 'string' }, NotificationProperty={ 'NotifyDelayAfter': 123 }, GlueVersion='string', NumberOfWorkers=123, WorkerType='Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', CodeGenConfigurationNodes={ 'string': { 'AthenaConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'ConnectionTable': 'string', 'SchemaName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'JDBCConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'FilterPredicate': 'string', 'PartitionColumn': 'string', 'LowerBound': 123, 'UpperBound': 123, 'NumPartitions': 123, 'JobBookmarkKeys': [ 'string', ], 'JobBookmarkKeysSortOrder': 'string', 'DataTypeMapping': { 'string': 'DATE'|'STRING'|'TIMESTAMP'|'INT'|'FLOAT'|'LONG'|'BIGDECIMAL'|'BYTE'|'SHORT'|'DOUBLE' } }, 'ConnectionTable': 'string', 'Query': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'RedshiftSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string' }, 'S3CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'S3CsvSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'Separator': 'comma'|'ctrla'|'pipe'|'semicolon'|'tab', 'Escaper': 'string', 'QuoteChar': 'quote'|'quillemet'|'single_quote'|'disabled', 'Multiline': True|False, 'WithHeader': True|False, 'WriteHeader': True|False, 'SkipFirst': True|False, 'OptimizePerformance': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3JsonSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'JsonPath': 'string', 'Multiline': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3ParquetSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 
'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'RelationalCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'DynamoDBCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'JDBCConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectionTable': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'RedshiftTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string', 'UpsertRedshiftOptions': { 'TableLocation': 'string', 'ConnectionName': 'string', 'UpsertKeys': [ 'string', ] } }, 'S3CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3GlueParquetTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'S3DirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'string', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'ApplyMapping': { 'Name': 'string', 'Inputs': [ 'string', ], 'Mapping': [ { 'ToKey': 'string', 'FromPath': [ 'string', ], 'FromType': 'string', 'ToType': 'string', 'Dropped': True|False, 'Children': {'... 
recursive ...'} }, ] }, 'SelectFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'DropFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'RenameField': { 'Name': 'string', 'Inputs': [ 'string', ], 'SourcePath': [ 'string', ], 'TargetPath': [ 'string', ] }, 'Spigot': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Topk': 123, 'Prob': 123.0 }, 'Join': { 'Name': 'string', 'Inputs': [ 'string', ], 'JoinType': 'equijoin'|'left'|'right'|'outer'|'leftsemi'|'leftanti', 'Columns': [ { 'From': 'string', 'Keys': [ [ 'string', ], ] }, ] }, 'SplitFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'SelectFromCollection': { 'Name': 'string', 'Inputs': [ 'string', ], 'Index': 123 }, 'FillMissingValues': { 'Name': 'string', 'Inputs': [ 'string', ], 'ImputedPath': 'string', 'FilledPath': 'string' }, 'Filter': { 'Name': 'string', 'Inputs': [ 'string', ], 'LogicalOperator': 'AND'|'OR', 'Filters': [ { 'Operation': 'EQ'|'LT'|'GT'|'LTE'|'GTE'|'REGEX'|'ISNULL', 'Negated': True|False, 'Values': [ { 'Type': 'COLUMNEXTRACTED'|'CONSTANT', 'Value': [ 'string', ] }, ] }, ] }, 'CustomCode': { 'Name': 'string', 'Inputs': [ 'string', ], 'Code': 'string', 'ClassName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkSQL': { 'Name': 'string', 'Inputs': [ 'string', ], 'SqlQuery': 'string', 'SqlAliases': [ { 'From': 'string', 'Alias': 'string' }, ], 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'DirectKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string', 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DirectKafkaSource': { 'Name': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123, 'IncludeHeaders': True|False, 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'WindowSize': 123, 'DetectSchema': True|False, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 
'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string', 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKafkaSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123, 'IncludeHeaders': True|False, 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DropNullFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'NullCheckBoxList': { 'IsEmpty': True|False, 'IsNullString': True|False, 'IsNegOne': True|False }, 'NullTextList': [ { 'Value': 'string', 'Datatype': { 'Id': 'string', 'Label': 'string' } }, ] }, 'Merge': { 'Name': 'string', 'Inputs': [ 'string', ], 'Source': 'string', 'PrimaryKeys': [ [ 'string', ], ] }, 'Union': { 'Name': 'string', 'Inputs': [ 'string', ], 'UnionType': 'ALL'|'DISTINCT' }, 'PIIDetection': { 'Name': 'string', 'Inputs': [ 'string', ], 'PiiType': 'RowAudit'|'RowMasking'|'ColumnAudit'|'ColumnMasking', 'EntityTypesToDetect': [ 'string', ], 'OutputColumnName': 'string', 'SampleFraction': 123.0, 'ThresholdFraction': 123.0, 'MaskValue': 'string' }, 'Aggregate': { 'Name': 'string', 'Inputs': [ 'string', ], 'Groups': [ [ 'string', ], ], 'Aggs': [ { 'Column': [ 'string', ], 'AggFunc': 'avg'|'countDistinct'|'count'|'first'|'last'|'kurtosis'|'max'|'min'|'skewness'|'stddev_samp'|'stddev_pop'|'sum'|'sumDistinct'|'var_samp'|'var_pop' }, ] }, 'DropDuplicates': { 'Name': 'string', 'Inputs': [ 'string', ], 'Columns': [ [ 'string', ], ] }, 'GovernedCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'GovernedCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'MicrosoftSQLServerCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MicrosoftSQLServerCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'DynamicTransform': { 'Name': 'string', 'TransformName': 'string', 'Inputs': [ 'string', ], 'Parameters': [ { 'Name': 'string', 'Type': 
'str'|'int'|'float'|'complex'|'bool'|'list'|'null', 'ValidationRule': 'string', 'ValidationMessage': 'string', 'Value': [ 'string', ], 'ListType': 'str'|'int'|'float'|'complex'|'bool'|'list'|'null', 'IsOptional': True|False }, ], 'FunctionName': 'string', 'Path': 'string', 'Version': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'EvaluateDataQuality': { 'Name': 'string', 'Inputs': [ 'string', ], 'Ruleset': 'string', 'Output': 'PrimaryInput'|'EvaluationResults', 'PublishingOptions': { 'EvaluationContext': 'string', 'ResultsS3Prefix': 'string', 'CloudWatchMetricsEnabled': True|False, 'ResultsPublishingEnabled': True|False }, 'StopJobOnFailureOptions': { 'StopJobOnFailureTiming': 'Immediate'|'AfterDataLoad' } }, 'S3CatalogHudiSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalHudiOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogHudiSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalHudiOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3HudiSource': { 'Name': 'string', 'Paths': [ 'string', ], 'AdditionalHudiOptions': { 'string': 'string' }, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3HudiCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3HudiDirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Compression': 'gzip'|'lzo'|'uncompressed'|'snappy', 'PartitionKeys': [ [ 'string', ], ], 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'DirectJDBCSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'ConnectionName': 'string', 'ConnectionType': 'sqlserver'|'mysql'|'oracle'|'postgresql'|'redshift', 'RedshiftTmpDir': 'string' }, 'S3CatalogDeltaSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalDeltaOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogDeltaSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalDeltaOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3DeltaSource': { 'Name': 'string', 'Paths': [ 'string', ], 'AdditionalDeltaOptions': { 'string': 'string' }, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3DeltaCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3DeltaDirectTarget': { 'Name': 'string', 
'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'uncompressed'|'snappy', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'AmazonRedshiftSource': { 'Name': 'string', 'Data': { 'AccessType': 'string', 'SourceType': 'string', 'Connection': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Schema': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Table': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogDatabase': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogTable': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogRedshiftSchema': 'string', 'CatalogRedshiftTable': 'string', 'TempDir': 'string', 'IamRole': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'AdvancedOptions': [ { 'Key': 'string', 'Value': 'string' }, ], 'SampleQuery': 'string', 'PreAction': 'string', 'PostAction': 'string', 'Action': 'string', 'TablePrefix': 'string', 'Upsert': True|False, 'MergeAction': 'string', 'MergeWhenMatched': 'string', 'MergeWhenNotMatched': 'string', 'MergeClause': 'string', 'CrawlerConnection': 'string', 'TableSchema': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ], 'StagingTable': 'string', 'SelectedColumns': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ] } }, 'AmazonRedshiftTarget': { 'Name': 'string', 'Data': { 'AccessType': 'string', 'SourceType': 'string', 'Connection': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Schema': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Table': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogDatabase': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogTable': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogRedshiftSchema': 'string', 'CatalogRedshiftTable': 'string', 'TempDir': 'string', 'IamRole': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'AdvancedOptions': [ { 'Key': 'string', 'Value': 'string' }, ], 'SampleQuery': 'string', 'PreAction': 'string', 'PostAction': 'string', 'Action': 'string', 'TablePrefix': 'string', 'Upsert': True|False, 'MergeAction': 'string', 'MergeWhenMatched': 'string', 'MergeWhenNotMatched': 'string', 'MergeClause': 'string', 'CrawlerConnection': 'string', 'TableSchema': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ], 'StagingTable': 'string', 'SelectedColumns': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ] }, 'Inputs': [ 'string', ] }, 'EvaluateDataQualityMultiFrame': { 'Name': 'string', 'Inputs': [ 'string', ], 'AdditionalDataSources': { 'string': 'string' }, 'Ruleset': 'string', 'PublishingOptions': { 'EvaluationContext': 'string', 'ResultsS3Prefix': 'string', 'CloudWatchMetricsEnabled': True|False, 'ResultsPublishingEnabled': True|False }, 'AdditionalOptions': { 'string': 'string' }, 'StopJobOnFailureOptions': { 'StopJobOnFailureTiming': 'Immediate'|'AfterDataLoad' } } } }, ExecutionClass='FLEX'|'STANDARD', SourceControlDetails={ 'Provider': 'GITHUB'|'AWS_CODE_COMMIT', 'Repository': 'string', 'Owner': 'string', 'Branch': 'string', 'Folder': 'string', 'LastCommitId': 'string', 'AuthStrategy': 
'PERSONAL_ACCESS_TOKEN'|'AWS_SECRETS_MANAGER', 'AuthToken': 'string' } )
Parameters
This section is too large to render here. Please see the AWS API Documentation: https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/CreateJob
dict
Response Syntax
{ 'Name': 'string' }
Response Structure
(dict) --
Name (string) --
The unique name that was provided for this job definition.
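A hedged sketch of creating a Ray job with the new Runtime field in Command and the Z.2X worker type. The role ARN, script location, command name ('glueray'), and runtime identifier ('Ray2.4') are assumptions or placeholders; consult the Glue developer guide for the values supported in your account and Region.
import boto3

glue = boto3.client('glue')

response = glue.create_job(
    Name='example-ray-job',
    Role='arn:aws:iam::123456789012:role/ExampleGlueJobRole',
    GlueVersion='4.0',    # Ray jobs should set GlueVersion to 4.0 or greater
    WorkerType='Z.2X',    # Ray worker type (2 M-DPU per worker)
    NumberOfWorkers=5,
    Command={
        'Name': 'glueray',            # assumed command name for Ray jobs
        'PythonVersion': '3.9',       # assumed Python version for this runtime
        'Runtime': 'Ray2.4',          # assumed runtime identifier
        'ScriptLocation': 's3://example-bucket/scripts/ray_job.py',  # placeholder
    },
)
print(response['Name'])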
{'WorkerType': {'Z.2X'}}
Creates a Glue machine learning transform. This operation creates the transform and all the necessary parameters to train it.
Call this operation as the first step in the process of using a machine learning transform (such as the FindMatches transform) for deduplicating data. You can provide an optional Description , in addition to the parameters that you want to use for your algorithm.
You must also specify certain parameters for the tasks that Glue runs on your behalf as part of learning from your data and creating a high-quality machine learning transform. These parameters include Role , and optionally, AllocatedCapacity , Timeout , and MaxRetries . For more information, see Jobs.
See also: AWS API Documentation
Request Syntax
client.create_ml_transform( Name='string', Description='string', InputRecordTables=[ { 'DatabaseName': 'string', 'TableName': 'string', 'CatalogId': 'string', 'ConnectionName': 'string', 'AdditionalOptions': { 'string': 'string' } }, ], Parameters={ 'TransformType': 'FIND_MATCHES', 'FindMatchesParameters': { 'PrimaryKeyColumnName': 'string', 'PrecisionRecallTradeoff': 123.0, 'AccuracyCostTradeoff': 123.0, 'EnforceProvidedLabels': True|False } }, Role='string', GlueVersion='string', MaxCapacity=123.0, WorkerType='Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', NumberOfWorkers=123, Timeout=123, MaxRetries=123, Tags={ 'string': 'string' }, TransformEncryption={ 'MlUserDataEncryption': { 'MlUserDataEncryptionMode': 'DISABLED'|'SSE-KMS', 'KmsKeyId': 'string' }, 'TaskRunSecurityConfigurationName': 'string' } )
string
[REQUIRED]
The unique name that you give the transform when you create it.
string
A description of the machine learning transform that is being defined. The default is an empty string.
list
[REQUIRED]
A list of Glue table definitions used by the transform.
(dict) --
The database and table in the Glue Data Catalog that is used for input or output data.
DatabaseName (string) -- [REQUIRED]
A database name in the Glue Data Catalog.
TableName (string) -- [REQUIRED]
A table name in the Glue Data Catalog.
CatalogId (string) --
A unique identifier for the Glue Data Catalog.
ConnectionName (string) --
The name of the connection to the Glue Data Catalog.
AdditionalOptions (dict) --
Additional options for the table. Currently there are two keys supported:
pushDownPredicate : to filter on partitions without having to list and read all the files in your dataset.
catalogPartitionPredicate : to use server-side partition pruning using partition indexes in the Glue Data Catalog.
(string) --
(string) --
dict
[REQUIRED]
The algorithmic parameters that are specific to the transform type used. Conditionally dependent on the transform type.
TransformType (string) -- [REQUIRED]
The type of machine learning transform.
For information about the types of machine learning transforms, see Creating Machine Learning Transforms.
FindMatchesParameters (dict) --
The parameters for the find matches algorithm.
PrimaryKeyColumnName (string) --
The name of a column that uniquely identifies rows in the source table. Used to help identify matching records.
PrecisionRecallTradeoff (float) --
The value selected when tuning your transform for a balance between precision and recall. A value of 0.5 means no preference; a value of 1.0 means a bias purely for precision, and a value of 0.0 means a bias for recall. Because this is a tradeoff, choosing values close to 1.0 means very low recall, and choosing values close to 0.0 results in very low precision.
The precision metric indicates how often your model is correct when it predicts a match.
The recall metric indicates how often, for an actual match, your model predicts the match.
AccuracyCostTradeoff (float) --
The value that is selected when tuning your transform for a balance between accuracy and cost. A value of 0.5 means that the system balances accuracy and cost concerns. A value of 1.0 means a bias purely for accuracy, which typically results in a higher cost, sometimes substantially higher. A value of 0.0 means a bias purely for cost, which results in a less accurate FindMatches transform, sometimes with unacceptable accuracy.
Accuracy measures how well the transform finds true positives and true negatives. Increasing accuracy requires more machine resources and cost. But it also results in increased recall.
Cost measures how many compute resources, and thus money, are consumed to run the transform.
EnforceProvidedLabels (boolean) --
The value to switch on or off to force the output to match the provided labels from users. If the value is True , the find matches transform forces the output to match the provided labels. The results override the normal conflation results. If the value is False , the find matches transform does not ensure all the labels provided are respected, and the results rely on the trained model.
Note that setting this value to true may increase the conflation execution time.
string
[REQUIRED]
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions. The required permissions include both Glue service role permissions to Glue resources, and Amazon S3 permissions required by the transform.
This role needs Glue service role permissions to allow access to resources in Glue. See Attach a Policy to IAM Users That Access Glue.
This role needs permission to your Amazon Simple Storage Service (Amazon S3) sources, targets, temporary directory, scripts, and any libraries used by the task run for this transform.
string
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
float
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType .
If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.
If MaxCapacity is set then neither NumberOfWorkers or WorkerType can be set.
If WorkerType is set, then NumberOfWorkers is required (and vice versa).
MaxCapacity and NumberOfWorkers must both be at least 1.
When the WorkerType field is set to a value other than Standard , the MaxCapacity field is set automatically and becomes read-only.
string
The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType .
If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.
If MaxCapacity is set then neither NumberOfWorkers or WorkerType can be set.
If WorkerType is set, then NumberOfWorkers is required (and vice versa).
MaxCapacity and NumberOfWorkers must both be at least 1.
integer
The number of workers of a defined workerType that are allocated when this task runs.
If WorkerType is set, then NumberOfWorkers is required (and vice versa).
integer
The timeout of the task run for this transform in minutes. This is the maximum time that a task run for this transform can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
integer
The maximum number of times to retry a task for this transform after a task run fails.
dict
The tags to use with this machine learning transform. You may use tags to limit access to the machine learning transform. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
(string) --
(string) --
dict
The encryption-at-rest settings of the transform that apply to accessing user data. Machine learning transforms can access user data encrypted in Amazon S3 using KMS.
MlUserDataEncryption (dict) --
An MLUserDataEncryption object containing the encryption mode and customer-provided KMS key ID.
MlUserDataEncryptionMode (string) -- [REQUIRED]
The encryption mode applied to user data. Valid values are:
DISABLED: encryption is disabled.
SSE-KMS: use server-side encryption with Key Management Service (SSE-KMS) for user data stored in Amazon S3.
KmsKeyId (string) --
The ID for the customer-provided KMS key.
TaskRunSecurityConfigurationName (string) --
The name of the security configuration.
dict
Response Syntax
{ 'TransformId': 'string' }
Response Structure
(dict) --
TransformId (string) --
A unique identifier that is generated for the transform.
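A minimal sketch of creating a FindMatches transform with boto3. Because WorkerType is set, NumberOfWorkers is also supplied and MaxCapacity is omitted, per the constraints described above. The database, table, role, and key column names are illustrative.
import boto3

glue = boto3.client('glue')

response = glue.create_ml_transform(
    Name='example-find-matches',
    InputRecordTables=[
        {'DatabaseName': 'example_db', 'TableName': 'customers'},
    ],
    Parameters={
        'TransformType': 'FIND_MATCHES',
        'FindMatchesParameters': {
            'PrimaryKeyColumnName': 'customer_id',
            'PrecisionRecallTradeoff': 0.5,
            'AccuracyCostTradeoff': 0.5,
        },
    },
    Role='arn:aws:iam::123456789012:role/ExampleGlueMLRole',
    GlueVersion='1.0',
    WorkerType='G.1X',
    NumberOfWorkers=10,
)
print(response['TransformId'])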
{'WorkerType': {'Z.2X'}}
Creates a new session.
See also: AWS API Documentation
Request Syntax
client.create_session( Id='string', Description='string', Role='string', Command={ 'Name': 'string', 'PythonVersion': 'string' }, Timeout=123, IdleTimeout=123, DefaultArguments={ 'string': 'string' }, Connections={ 'Connections': [ 'string', ] }, MaxCapacity=123.0, NumberOfWorkers=123, WorkerType='Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', SecurityConfiguration='string', GlueVersion='string', Tags={ 'string': 'string' }, RequestOrigin='string' )
string
[REQUIRED]
The ID of the session request.
string
The description of the session.
string
[REQUIRED]
The IAM Role ARN
dict
[REQUIRED]
The SessionCommand that runs the job.
Name (string) --
Specifies the name of the SessionCommand. Can be 'glueetl' or 'gluestreaming'.
PythonVersion (string) --
Specifies the Python version. The Python version indicates the version supported for jobs of type Spark.
integer
The number of minutes before the session times out. The default for Spark ETL jobs is 48 hours (2,880 minutes), which is the maximum session lifetime for this job type. Consult the documentation for other job types.
integer
The number of idle minutes before the session times out. The default for Spark ETL jobs is the value of Timeout. Consult the documentation for other job types.
dict
A map array of key-value pairs. Max is 75 pairs.
(string) --
(string) --
dict
The connections to use for the session.
Connections (list) --
A list of connections used by the job.
(string) --
float
The number of Glue data processing units (DPUs) that can be allocated when the job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB memory.
integer
The number of workers of a defined WorkerType to use for the session.
string
The type of predefined worker that is allocated when the session runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
string
The name of the SecurityConfiguration structure to be used with the session
string
The Glue version determines the versions of Apache Spark and Python that Glue supports. The GlueVersion must be greater than 2.0.
dict
The map of key value pairs (tags) belonging to the session.
(string) --
(string) --
string
The origin of the request.
dict
Response Syntax
{ 'Session': { 'Id': 'string', 'CreatedOn': datetime(2015, 1, 1), 'Status': 'PROVISIONING'|'READY'|'FAILED'|'TIMEOUT'|'STOPPING'|'STOPPED', 'ErrorMessage': 'string', 'Description': 'string', 'Role': 'string', 'Command': { 'Name': 'string', 'PythonVersion': 'string' }, 'DefaultArguments': { 'string': 'string' }, 'Connections': { 'Connections': [ 'string', ] }, 'Progress': 123.0, 'MaxCapacity': 123.0, 'SecurityConfiguration': 'string', 'GlueVersion': 'string' } }
Response Structure
(dict) --
Session (dict) --
Returns the session object in the response.
Id (string) --
The ID of the session.
CreatedOn (datetime) --
The time and date when the session was created.
Status (string) --
The session status.
ErrorMessage (string) --
The error message displayed during the session.
Description (string) --
The description of the session.
Role (string) --
The name or Amazon Resource Name (ARN) of the IAM role associated with the Session.
Command (dict) --
The command object. See SessionCommand.
Name (string) --
Specifies the name of the SessionCommand. Can be 'glueetl' or 'gluestreaming'.
PythonVersion (string) --
Specifies the Python version. The Python version indicates the version supported for jobs of type Spark.
DefaultArguments (dict) --
A map array of key-value pairs. Max is 75 pairs.
(string) --
(string) --
Connections (dict) --
The connections used for the session.
Connections (list) --
A list of connections used by the job.
(string) --
Progress (float) --
The code execution progress of the session.
MaxCapacity (float) --
The number of Glue data processing units (DPUs) that can be allocated when the job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB memory.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with the session.
GlueVersion (string) --
The Glue version determines the versions of Apache Spark and Python that Glue supports. The GlueVersion must be greater than 2.0.
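A minimal sketch of starting an interactive session with boto3, following the request syntax above. The session ID, role ARN, and timeout values are placeholders; the 'glueetl' command name and a GlueVersion greater than 2.0 follow the parameter descriptions above.
import boto3

glue = boto3.client('glue')

response = glue.create_session(
    Id='example-session',
    Role='arn:aws:iam::123456789012:role/ExampleGlueSessionRole',
    Command={'Name': 'glueetl', 'PythonVersion': '3'},
    GlueVersion='3.0',
    WorkerType='G.1X',
    NumberOfWorkers=2,
    Timeout=60,       # minutes before the session times out
    IdleTimeout=30,   # minutes of inactivity before the session times out
)
session = response['Session']
print(session['Id'], session['Status'])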
{'DevEndpoint': {'WorkerType': {'Z.2X'}}}
Retrieves information about a specified development endpoint.
Note
When you create a development endpoint in a virtual private cloud (VPC), Glue returns only a private IP address, and the public IP address field is not populated. When you create a non-VPC development endpoint, Glue returns only a public IP address.
See also: AWS API Documentation
Request Syntax
client.get_dev_endpoint( EndpointName='string' )
string
[REQUIRED]
Name of the DevEndpoint to retrieve information for.
dict
Response Syntax
{ 'DevEndpoint': { 'EndpointName': 'string', 'RoleArn': 'string', 'SecurityGroupIds': [ 'string', ], 'SubnetId': 'string', 'YarnEndpointAddress': 'string', 'PrivateAddress': 'string', 'ZeppelinRemoteSparkInterpreterPort': 123, 'PublicAddress': 'string', 'Status': 'string', 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'GlueVersion': 'string', 'NumberOfWorkers': 123, 'NumberOfNodes': 123, 'AvailabilityZone': 'string', 'VpcId': 'string', 'ExtraPythonLibsS3Path': 'string', 'ExtraJarsS3Path': 'string', 'FailureReason': 'string', 'LastUpdateStatus': 'string', 'CreatedTimestamp': datetime(2015, 1, 1), 'LastModifiedTimestamp': datetime(2015, 1, 1), 'PublicKey': 'string', 'PublicKeys': [ 'string', ], 'SecurityConfiguration': 'string', 'Arguments': { 'string': 'string' } } }
Response Structure
(dict) --
DevEndpoint (dict) --
A DevEndpoint definition.
EndpointName (string) --
The name of the DevEndpoint .
RoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role used in this DevEndpoint .
SecurityGroupIds (list) --
A list of security group identifiers used in this DevEndpoint .
(string) --
SubnetId (string) --
The subnet ID for this DevEndpoint .
YarnEndpointAddress (string) --
The YARN endpoint address used by this DevEndpoint .
PrivateAddress (string) --
A private IP address to access the DevEndpoint within a VPC if the DevEndpoint is created within one. The PrivateAddress field is present only when you create the DevEndpoint within your VPC.
ZeppelinRemoteSparkInterpreterPort (integer) --
The Apache Zeppelin port for the remote Apache Spark interpreter.
PublicAddress (string) --
The public IP address used by this DevEndpoint . The PublicAddress field is present only when you create a non-virtual private cloud (VPC) DevEndpoint .
Status (string) --
The current status of this DevEndpoint .
WorkerType (string) --
The type of predefined worker that is allocated to the development endpoint. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
Known issue: when a development endpoint is created with the G.2X WorkerType configuration, the Spark drivers for the development endpoint will run on 4 vCPU, 16 GB of memory, and a 64 GB disk.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Development endpoints that are created without specifying a Glue version default to Glue 0.9.
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated to the development endpoint.
The maximum number of workers you can define is 299 for G.1X , and 149 for G.2X .
NumberOfNodes (integer) --
The number of Glue Data Processing Units (DPUs) allocated to this DevEndpoint .
AvailabilityZone (string) --
The Amazon Web Services Availability Zone where this DevEndpoint is located.
VpcId (string) --
The ID of the virtual private cloud (VPC) used by this DevEndpoint .
ExtraPythonLibsS3Path (string) --
The paths to one or more Python libraries in an Amazon S3 bucket that should be loaded in your DevEndpoint . Multiple values must be complete paths separated by a comma.
Note
You can only use pure Python libraries with a DevEndpoint . Libraries that rely on C extensions, such as the pandas Python data analysis library, are not currently supported.
ExtraJarsS3Path (string) --
The path to one or more Java .jar files in an S3 bucket that should be loaded in your DevEndpoint .
Note
You can only use pure Java/Scala libraries with a DevEndpoint .
FailureReason (string) --
The reason for a current failure in this DevEndpoint .
LastUpdateStatus (string) --
The status of the last update.
CreatedTimestamp (datetime) --
The point in time at which this DevEndpoint was created.
LastModifiedTimestamp (datetime) --
The point in time at which this DevEndpoint was last modified.
PublicKey (string) --
The public key to be used by this DevEndpoint for authentication. This attribute is provided for backward compatibility because the recommended attribute to use is public keys.
PublicKeys (list) --
A list of public keys to be used by the DevEndpoints for authentication. Using this attribute is preferred over a single public key because the public keys allow you to have a different private key per client.
Note
If you previously created an endpoint with a public key, you must remove that key to be able to set a list of public keys. Call the UpdateDevEndpoint API operation with the public key content in the deletePublicKeys attribute, and the list of new keys in the addPublicKeys attribute.
(string) --
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this DevEndpoint .
Arguments (dict) --
A map of arguments used to configure the DevEndpoint .
Valid arguments are:
"--enable-glue-datacatalog": ""
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
(string) --
(string) --
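A minimal sketch of retrieving a development endpoint with boto3. The endpoint name is a placeholder; because only PrivateAddress or PublicAddress is populated depending on whether the endpoint runs in a VPC (see the note above), both fields are read defensively.
import boto3

glue = boto3.client('glue')

response = glue.get_dev_endpoint(EndpointName='example-dev-endpoint')
endpoint = response['DevEndpoint']
print(endpoint['EndpointName'], endpoint['Status'])
# Only one of these is present, depending on whether the endpoint was created in a VPC.
print('Address:', endpoint.get('PrivateAddress') or endpoint.get('PublicAddress'))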
{'DevEndpoints': {'WorkerType': {'Z.2X'}}}
Retrieves all the development endpoints in this Amazon Web Services account.
Note
When you create a development endpoint in a virtual private cloud (VPC), Glue returns only a private IP address and the public IP address field is not populated. When you create a non-VPC development endpoint, Glue returns only a public IP address.
See also: AWS API Documentation
Request Syntax
client.get_dev_endpoints( MaxResults=123, NextToken='string' )
integer
The maximum number of results to return.
string
A continuation token, if this is a continuation call.
dict
Response Syntax
{ 'DevEndpoints': [ { 'EndpointName': 'string', 'RoleArn': 'string', 'SecurityGroupIds': [ 'string', ], 'SubnetId': 'string', 'YarnEndpointAddress': 'string', 'PrivateAddress': 'string', 'ZeppelinRemoteSparkInterpreterPort': 123, 'PublicAddress': 'string', 'Status': 'string', 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'GlueVersion': 'string', 'NumberOfWorkers': 123, 'NumberOfNodes': 123, 'AvailabilityZone': 'string', 'VpcId': 'string', 'ExtraPythonLibsS3Path': 'string', 'ExtraJarsS3Path': 'string', 'FailureReason': 'string', 'LastUpdateStatus': 'string', 'CreatedTimestamp': datetime(2015, 1, 1), 'LastModifiedTimestamp': datetime(2015, 1, 1), 'PublicKey': 'string', 'PublicKeys': [ 'string', ], 'SecurityConfiguration': 'string', 'Arguments': { 'string': 'string' } }, ], 'NextToken': 'string' }
Response Structure
(dict) --
DevEndpoints (list) --
A list of DevEndpoint definitions.
(dict) --
A development endpoint where a developer can remotely debug extract, transform, and load (ETL) scripts.
EndpointName (string) --
The name of the DevEndpoint .
RoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role used in this DevEndpoint .
SecurityGroupIds (list) --
A list of security group identifiers used in this DevEndpoint .
(string) --
SubnetId (string) --
The subnet ID for this DevEndpoint .
YarnEndpointAddress (string) --
The YARN endpoint address used by this DevEndpoint .
PrivateAddress (string) --
A private IP address to access the DevEndpoint within a VPC if the DevEndpoint is created within one. The PrivateAddress field is present only when you create the DevEndpoint within your VPC.
ZeppelinRemoteSparkInterpreterPort (integer) --
The Apache Zeppelin port for the remote Apache Spark interpreter.
PublicAddress (string) --
The public IP address used by this DevEndpoint . The PublicAddress field is present only when you create a non-virtual private cloud (VPC) DevEndpoint .
Status (string) --
The current status of this DevEndpoint .
WorkerType (string) --
The type of predefined worker that is allocated to the development endpoint. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
Known issue: when a development endpoint is created with the G.2X WorkerType configuration, the Spark drivers for the development endpoint will run on 4 vCPU, 16 GB of memory, and a 64 GB disk.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Development endpoints that are created without specifying a Glue version default to Glue 0.9.
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated to the development endpoint.
The maximum number of workers you can define is 299 for G.1X , and 149 for G.2X .
NumberOfNodes (integer) --
The number of Glue Data Processing Units (DPUs) allocated to this DevEndpoint .
AvailabilityZone (string) --
The Amazon Web Services Availability Zone where this DevEndpoint is located.
VpcId (string) --
The ID of the virtual private cloud (VPC) used by this DevEndpoint .
ExtraPythonLibsS3Path (string) --
The paths to one or more Python libraries in an Amazon S3 bucket that should be loaded in your DevEndpoint . Multiple values must be complete paths separated by a comma.
Note
You can only use pure Python libraries with a DevEndpoint . Libraries that rely on C extensions, such as the pandas Python data analysis library, are not currently supported.
ExtraJarsS3Path (string) --
The path to one or more Java .jar files in an S3 bucket that should be loaded in your DevEndpoint .
Note
You can only use pure Java/Scala libraries with a DevEndpoint .
FailureReason (string) --
The reason for a current failure in this DevEndpoint .
LastUpdateStatus (string) --
The status of the last update.
CreatedTimestamp (datetime) --
The point in time at which this DevEndpoint was created.
LastModifiedTimestamp (datetime) --
The point in time at which this DevEndpoint was last modified.
PublicKey (string) --
The public key to be used by this DevEndpoint for authentication. This attribute is provided for backward compatibility because the recommended attribute to use is public keys.
PublicKeys (list) --
A list of public keys to be used by the DevEndpoints for authentication. Using this attribute is preferred over a single public key because the public keys allow you to have a different private key per client.
Note
If you previously created an endpoint with a public key, you must remove that key to be able to set a list of public keys. Call the UpdateDevEndpoint API operation with the public key content in the deletePublicKeys attribute, and the list of new keys in the addPublicKeys attribute.
(string) --
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this DevEndpoint .
Arguments (dict) --
A map of arguments used to configure the DevEndpoint .
Valid arguments are:
"--enable-glue-datacatalog": ""
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
(string) --
(string) --
NextToken (string) --
A continuation token, if not all DevEndpoint definitions have yet been returned.
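Because GetDevEndpoints returns results one page at a time, callers typically follow NextToken until it is no longer present. A minimal sketch with the boto3 Glue client (grouping by WorkerType is purely illustrative):

import boto3

glue = boto3.client('glue')

# Collect every development endpoint by following NextToken until it is absent.
endpoints = []
kwargs = {'MaxResults': 25}
while True:
    page = glue.get_dev_endpoints(**kwargs)
    endpoints.extend(page.get('DevEndpoints', []))
    token = page.get('NextToken')
    if not token:
        break
    kwargs['NextToken'] = token

# Group endpoint names by worker type (the enum now includes Z.2X).
by_worker_type = {}
for ep in endpoints:
    by_worker_type.setdefault(ep.get('WorkerType', 'unknown'), []).append(ep['EndpointName'])
print(by_worker_type)

boto3 also exposes paginators for many Glue operations; if one is available for get_dev_endpoints in your SDK version, it removes the manual token handling.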
{'Job': {'Command': {'Runtime': 'string'}, 'WorkerType': {'Z.2X'}}}
Retrieves an existing job definition.
See also: AWS API Documentation
Request Syntax
client.get_job( JobName='string' )
string
[REQUIRED]
The name of the job definition to retrieve.
dict
Response Syntax
{ 'Job': { 'Name': 'string', 'Description': 'string', 'LogUri': 'string', 'Role': 'string', 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'ExecutionProperty': { 'MaxConcurrentRuns': 123 }, 'Command': { 'Name': 'string', 'ScriptLocation': 'string', 'PythonVersion': 'string', 'Runtime': 'string' }, 'DefaultArguments': { 'string': 'string' }, 'NonOverridableArguments': { 'string': 'string' }, 'Connections': { 'Connections': [ 'string', ] }, 'MaxRetries': 123, 'AllocatedCapacity': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'CodeGenConfigurationNodes': { 'string': { 'AthenaConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'ConnectionTable': 'string', 'SchemaName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'JDBCConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'FilterPredicate': 'string', 'PartitionColumn': 'string', 'LowerBound': 123, 'UpperBound': 123, 'NumPartitions': 123, 'JobBookmarkKeys': [ 'string', ], 'JobBookmarkKeysSortOrder': 'string', 'DataTypeMapping': { 'string': 'DATE'|'STRING'|'TIMESTAMP'|'INT'|'FLOAT'|'LONG'|'BIGDECIMAL'|'BYTE'|'SHORT'|'DOUBLE' } }, 'ConnectionTable': 'string', 'Query': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'RedshiftSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string' }, 'S3CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'S3CsvSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'Separator': 'comma'|'ctrla'|'pipe'|'semicolon'|'tab', 'Escaper': 'string', 'QuoteChar': 'quote'|'quillemet'|'single_quote'|'disabled', 'Multiline': True|False, 'WithHeader': True|False, 'WriteHeader': True|False, 'SkipFirst': True|False, 'OptimizePerformance': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3JsonSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'JsonPath': 'string', 'Multiline': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3ParquetSource': { 'Name': 'string', 
'Paths': [ 'string', ], 'CompressionType': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'RelationalCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'DynamoDBCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'JDBCConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectionTable': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'RedshiftTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string', 'UpsertRedshiftOptions': { 'TableLocation': 'string', 'ConnectionName': 'string', 'UpsertKeys': [ 'string', ] } }, 'S3CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3GlueParquetTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'S3DirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'string', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'ApplyMapping': { 'Name': 'string', 'Inputs': [ 'string', ], 'Mapping': [ { 'ToKey': 'string', 'FromPath': [ 'string', ], 'FromType': 'string', 'ToType': 'string', 'Dropped': True|False, 'Children': {'... 
recursive ...'} }, ] }, 'SelectFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'DropFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'RenameField': { 'Name': 'string', 'Inputs': [ 'string', ], 'SourcePath': [ 'string', ], 'TargetPath': [ 'string', ] }, 'Spigot': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Topk': 123, 'Prob': 123.0 }, 'Join': { 'Name': 'string', 'Inputs': [ 'string', ], 'JoinType': 'equijoin'|'left'|'right'|'outer'|'leftsemi'|'leftanti', 'Columns': [ { 'From': 'string', 'Keys': [ [ 'string', ], ] }, ] }, 'SplitFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'SelectFromCollection': { 'Name': 'string', 'Inputs': [ 'string', ], 'Index': 123 }, 'FillMissingValues': { 'Name': 'string', 'Inputs': [ 'string', ], 'ImputedPath': 'string', 'FilledPath': 'string' }, 'Filter': { 'Name': 'string', 'Inputs': [ 'string', ], 'LogicalOperator': 'AND'|'OR', 'Filters': [ { 'Operation': 'EQ'|'LT'|'GT'|'LTE'|'GTE'|'REGEX'|'ISNULL', 'Negated': True|False, 'Values': [ { 'Type': 'COLUMNEXTRACTED'|'CONSTANT', 'Value': [ 'string', ] }, ] }, ] }, 'CustomCode': { 'Name': 'string', 'Inputs': [ 'string', ], 'Code': 'string', 'ClassName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkSQL': { 'Name': 'string', 'Inputs': [ 'string', ], 'SqlQuery': 'string', 'SqlAliases': [ { 'From': 'string', 'Alias': 'string' }, ], 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'DirectKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string', 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DirectKafkaSource': { 'Name': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123, 'IncludeHeaders': True|False, 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'WindowSize': 123, 'DetectSchema': True|False, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 
'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string', 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKafkaSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123, 'IncludeHeaders': True|False, 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DropNullFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'NullCheckBoxList': { 'IsEmpty': True|False, 'IsNullString': True|False, 'IsNegOne': True|False }, 'NullTextList': [ { 'Value': 'string', 'Datatype': { 'Id': 'string', 'Label': 'string' } }, ] }, 'Merge': { 'Name': 'string', 'Inputs': [ 'string', ], 'Source': 'string', 'PrimaryKeys': [ [ 'string', ], ] }, 'Union': { 'Name': 'string', 'Inputs': [ 'string', ], 'UnionType': 'ALL'|'DISTINCT' }, 'PIIDetection': { 'Name': 'string', 'Inputs': [ 'string', ], 'PiiType': 'RowAudit'|'RowMasking'|'ColumnAudit'|'ColumnMasking', 'EntityTypesToDetect': [ 'string', ], 'OutputColumnName': 'string', 'SampleFraction': 123.0, 'ThresholdFraction': 123.0, 'MaskValue': 'string' }, 'Aggregate': { 'Name': 'string', 'Inputs': [ 'string', ], 'Groups': [ [ 'string', ], ], 'Aggs': [ { 'Column': [ 'string', ], 'AggFunc': 'avg'|'countDistinct'|'count'|'first'|'last'|'kurtosis'|'max'|'min'|'skewness'|'stddev_samp'|'stddev_pop'|'sum'|'sumDistinct'|'var_samp'|'var_pop' }, ] }, 'DropDuplicates': { 'Name': 'string', 'Inputs': [ 'string', ], 'Columns': [ [ 'string', ], ] }, 'GovernedCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'GovernedCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'MicrosoftSQLServerCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MicrosoftSQLServerCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'DynamicTransform': { 'Name': 'string', 'TransformName': 'string', 'Inputs': [ 'string', ], 'Parameters': [ { 'Name': 'string', 'Type': 
'str'|'int'|'float'|'complex'|'bool'|'list'|'null', 'ValidationRule': 'string', 'ValidationMessage': 'string', 'Value': [ 'string', ], 'ListType': 'str'|'int'|'float'|'complex'|'bool'|'list'|'null', 'IsOptional': True|False }, ], 'FunctionName': 'string', 'Path': 'string', 'Version': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'EvaluateDataQuality': { 'Name': 'string', 'Inputs': [ 'string', ], 'Ruleset': 'string', 'Output': 'PrimaryInput'|'EvaluationResults', 'PublishingOptions': { 'EvaluationContext': 'string', 'ResultsS3Prefix': 'string', 'CloudWatchMetricsEnabled': True|False, 'ResultsPublishingEnabled': True|False }, 'StopJobOnFailureOptions': { 'StopJobOnFailureTiming': 'Immediate'|'AfterDataLoad' } }, 'S3CatalogHudiSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalHudiOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogHudiSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalHudiOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3HudiSource': { 'Name': 'string', 'Paths': [ 'string', ], 'AdditionalHudiOptions': { 'string': 'string' }, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3HudiCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3HudiDirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Compression': 'gzip'|'lzo'|'uncompressed'|'snappy', 'PartitionKeys': [ [ 'string', ], ], 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'DirectJDBCSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'ConnectionName': 'string', 'ConnectionType': 'sqlserver'|'mysql'|'oracle'|'postgresql'|'redshift', 'RedshiftTmpDir': 'string' }, 'S3CatalogDeltaSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalDeltaOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogDeltaSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalDeltaOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3DeltaSource': { 'Name': 'string', 'Paths': [ 'string', ], 'AdditionalDeltaOptions': { 'string': 'string' }, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3DeltaCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3DeltaDirectTarget': { 'Name': 'string', 
'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'uncompressed'|'snappy', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'AmazonRedshiftSource': { 'Name': 'string', 'Data': { 'AccessType': 'string', 'SourceType': 'string', 'Connection': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Schema': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Table': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogDatabase': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogTable': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogRedshiftSchema': 'string', 'CatalogRedshiftTable': 'string', 'TempDir': 'string', 'IamRole': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'AdvancedOptions': [ { 'Key': 'string', 'Value': 'string' }, ], 'SampleQuery': 'string', 'PreAction': 'string', 'PostAction': 'string', 'Action': 'string', 'TablePrefix': 'string', 'Upsert': True|False, 'MergeAction': 'string', 'MergeWhenMatched': 'string', 'MergeWhenNotMatched': 'string', 'MergeClause': 'string', 'CrawlerConnection': 'string', 'TableSchema': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ], 'StagingTable': 'string', 'SelectedColumns': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ] } }, 'AmazonRedshiftTarget': { 'Name': 'string', 'Data': { 'AccessType': 'string', 'SourceType': 'string', 'Connection': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Schema': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Table': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogDatabase': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogTable': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogRedshiftSchema': 'string', 'CatalogRedshiftTable': 'string', 'TempDir': 'string', 'IamRole': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'AdvancedOptions': [ { 'Key': 'string', 'Value': 'string' }, ], 'SampleQuery': 'string', 'PreAction': 'string', 'PostAction': 'string', 'Action': 'string', 'TablePrefix': 'string', 'Upsert': True|False, 'MergeAction': 'string', 'MergeWhenMatched': 'string', 'MergeWhenNotMatched': 'string', 'MergeClause': 'string', 'CrawlerConnection': 'string', 'TableSchema': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ], 'StagingTable': 'string', 'SelectedColumns': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ] }, 'Inputs': [ 'string', ] }, 'EvaluateDataQualityMultiFrame': { 'Name': 'string', 'Inputs': [ 'string', ], 'AdditionalDataSources': { 'string': 'string' }, 'Ruleset': 'string', 'PublishingOptions': { 'EvaluationContext': 'string', 'ResultsS3Prefix': 'string', 'CloudWatchMetricsEnabled': True|False, 'ResultsPublishingEnabled': True|False }, 'AdditionalOptions': { 'string': 'string' }, 'StopJobOnFailureOptions': { 'StopJobOnFailureTiming': 'Immediate'|'AfterDataLoad' } } } }, 'ExecutionClass': 'FLEX'|'STANDARD', 'SourceControlDetails': { 'Provider': 'GITHUB'|'AWS_CODE_COMMIT', 'Repository': 'string', 'Owner': 'string', 'Branch': 'string', 'Folder': 'string', 'LastCommitId': 'string', 'AuthStrategy': 
'PERSONAL_ACCESS_TOKEN'|'AWS_SECRETS_MANAGER', 'AuthToken': 'string' } } }
Response Structure
This section is too large to render. Please see the AWS API Documentation: https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetJob
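Because this release adds Command.Runtime alongside the Z.2X worker type, a caller inspecting a job definition can read both fields as in the sketch below. The job name is hypothetical, and Runtime is expected to be populated only for Ray jobs.

import boto3

glue = boto3.client('glue')

# Hypothetical job name, for illustration only.
job = glue.get_job(JobName='my-etl-job')['Job']

command = job['Command']
print('Command name:', command.get('Name'))      # e.g. glueetl or pythonshell for Spark/shell jobs
print('Runtime:', command.get('Runtime'))        # Ray runtime identifier for Ray jobs; absent otherwise
print('Worker type:', job.get('WorkerType'))     # Z.2X indicates Ray workers
print('Glue version:', job.get('GlueVersion'))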
{'JobRun': {'WorkerType': {'Z.2X'}}}
Retrieves the metadata for a given job run.
See also: AWS API Documentation
Request Syntax
client.get_job_run( JobName='string', RunId='string', PredecessorsIncluded=True|False )
string
[REQUIRED]
Name of the job definition being run.
string
[REQUIRED]
The ID of the job run.
boolean
True if a list of predecessor runs should be returned.
dict
Response Syntax
{ 'JobRun': { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' } }
Response Structure
(dict) --
JobRun (dict) --
The requested job-run metadata.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity . Instead, you should specify a Worker type and the Number of workers .
Do not set MaxCapacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPU, 64 GB of memory, 128 GB disk), and provides up to 8 Ray workers (one per vCPU) based on the autoscaler.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray, Python and additional libraries available in your Ray job are determined by the Runtime parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution-class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
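A common pattern with GetJobRun is to poll until the run reaches a terminal state. A minimal sketch follows; the job name and run ID are hypothetical, and the terminal-state set is taken from the JobRunState values listed above.

import time
import boto3

glue = boto3.client('glue')

TERMINAL_STATES = {'STOPPED', 'SUCCEEDED', 'FAILED', 'TIMEOUT', 'ERROR'}

def wait_for_run(job_name, run_id, poll_seconds=30):
    # Poll GetJobRun until the run reaches a terminal JobRunState.
    while True:
        run = glue.get_job_run(JobName=job_name, RunId=run_id)['JobRun']
        if run['JobRunState'] in TERMINAL_STATES:
            return run
        time.sleep(poll_seconds)

# Hypothetical identifiers, for illustration only.
run = wait_for_run('my-etl-job', 'jr_0123456789abcdef')
print(run['JobRunState'], run.get('ErrorMessage'), run.get('DPUSeconds'))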
{'JobRuns': {'WorkerType': {'Z.2X'}}}
Retrieves metadata for all runs of a given job definition.
See also: AWS API Documentation
Request Syntax
client.get_job_runs( JobName='string', NextToken='string', MaxResults=123 )
string
[REQUIRED]
The name of the job definition for which to retrieve all job runs.
string
A continuation token, if this is a continuation call.
integer
The maximum size of the response.
dict
Response Syntax
{ 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ], 'NextToken': 'string' }
Response Structure
(dict) --
JobRuns (list) --
A list of job-run metadata objects.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity . Instead, you should specify a Worker type and the Number of workers .
Do not set MaxCapacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPU, 64 GB of memory, 128 GB disk), and provides up to 8 Ray workers (one per vCPU) based on the autoscaler.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray, Python and additional libraries available in your Ray job are determined by the Runtime parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution-class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
NextToken (string) --
A continuation token, if not all requested job runs have been returned.
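GetJobRuns is paginated the same way as the other list operations. A sketch that collects every run for a job and tallies run states (job name hypothetical):

import boto3
from collections import Counter

glue = boto3.client('glue')

runs = []
kwargs = {'JobName': 'my-etl-job', 'MaxResults': 100}  # hypothetical job name
while True:
    page = glue.get_job_runs(**kwargs)
    runs.extend(page.get('JobRuns', []))
    token = page.get('NextToken')
    if not token:
        break
    kwargs['NextToken'] = token

# Tally job runs by their current JobRunState.
print(Counter(run['JobRunState'] for run in runs))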
{'Jobs': {'Command': {'Runtime': 'string'}, 'WorkerType': {'Z.2X'}}}
Retrieves all current job definitions.
See also: AWS API Documentation
Request Syntax
client.get_jobs( NextToken='string', MaxResults=123 )
string
A continuation token, if this is a continuation call.
integer
The maximum size of the response.
dict
Response Syntax
{ 'Jobs': [ { 'Name': 'string', 'Description': 'string', 'LogUri': 'string', 'Role': 'string', 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'ExecutionProperty': { 'MaxConcurrentRuns': 123 }, 'Command': { 'Name': 'string', 'ScriptLocation': 'string', 'PythonVersion': 'string', 'Runtime': 'string' }, 'DefaultArguments': { 'string': 'string' }, 'NonOverridableArguments': { 'string': 'string' }, 'Connections': { 'Connections': [ 'string', ] }, 'MaxRetries': 123, 'AllocatedCapacity': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'CodeGenConfigurationNodes': { 'string': { 'AthenaConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'ConnectionTable': 'string', 'SchemaName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'JDBCConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'FilterPredicate': 'string', 'PartitionColumn': 'string', 'LowerBound': 123, 'UpperBound': 123, 'NumPartitions': 123, 'JobBookmarkKeys': [ 'string', ], 'JobBookmarkKeysSortOrder': 'string', 'DataTypeMapping': { 'string': 'DATE'|'STRING'|'TIMESTAMP'|'INT'|'FLOAT'|'LONG'|'BIGDECIMAL'|'BYTE'|'SHORT'|'DOUBLE' } }, 'ConnectionTable': 'string', 'Query': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'RedshiftSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string' }, 'S3CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'S3CsvSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'Separator': 'comma'|'ctrla'|'pipe'|'semicolon'|'tab', 'Escaper': 'string', 'QuoteChar': 'quote'|'quillemet'|'single_quote'|'disabled', 'Multiline': True|False, 'WithHeader': True|False, 'WriteHeader': True|False, 'SkipFirst': True|False, 'OptimizePerformance': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3JsonSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'JsonPath': 'string', 'Multiline': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3ParquetSource': { 'Name': 'string', 
'Paths': [ 'string', ], 'CompressionType': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'RelationalCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'DynamoDBCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'JDBCConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectionTable': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'RedshiftTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string', 'UpsertRedshiftOptions': { 'TableLocation': 'string', 'ConnectionName': 'string', 'UpsertKeys': [ 'string', ] } }, 'S3CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3GlueParquetTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'S3DirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'string', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'ApplyMapping': { 'Name': 'string', 'Inputs': [ 'string', ], 'Mapping': [ { 'ToKey': 'string', 'FromPath': [ 'string', ], 'FromType': 'string', 'ToType': 'string', 'Dropped': True|False, 'Children': {'... 
recursive ...'} }, ] }, 'SelectFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'DropFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'RenameField': { 'Name': 'string', 'Inputs': [ 'string', ], 'SourcePath': [ 'string', ], 'TargetPath': [ 'string', ] }, 'Spigot': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Topk': 123, 'Prob': 123.0 }, 'Join': { 'Name': 'string', 'Inputs': [ 'string', ], 'JoinType': 'equijoin'|'left'|'right'|'outer'|'leftsemi'|'leftanti', 'Columns': [ { 'From': 'string', 'Keys': [ [ 'string', ], ] }, ] }, 'SplitFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'SelectFromCollection': { 'Name': 'string', 'Inputs': [ 'string', ], 'Index': 123 }, 'FillMissingValues': { 'Name': 'string', 'Inputs': [ 'string', ], 'ImputedPath': 'string', 'FilledPath': 'string' }, 'Filter': { 'Name': 'string', 'Inputs': [ 'string', ], 'LogicalOperator': 'AND'|'OR', 'Filters': [ { 'Operation': 'EQ'|'LT'|'GT'|'LTE'|'GTE'|'REGEX'|'ISNULL', 'Negated': True|False, 'Values': [ { 'Type': 'COLUMNEXTRACTED'|'CONSTANT', 'Value': [ 'string', ] }, ] }, ] }, 'CustomCode': { 'Name': 'string', 'Inputs': [ 'string', ], 'Code': 'string', 'ClassName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkSQL': { 'Name': 'string', 'Inputs': [ 'string', ], 'SqlQuery': 'string', 'SqlAliases': [ { 'From': 'string', 'Alias': 'string' }, ], 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'DirectKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string', 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DirectKafkaSource': { 'Name': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123, 'IncludeHeaders': True|False, 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'WindowSize': 123, 'DetectSchema': True|False, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 
'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string', 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKafkaSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123, 'IncludeHeaders': True|False, 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DropNullFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'NullCheckBoxList': { 'IsEmpty': True|False, 'IsNullString': True|False, 'IsNegOne': True|False }, 'NullTextList': [ { 'Value': 'string', 'Datatype': { 'Id': 'string', 'Label': 'string' } }, ] }, 'Merge': { 'Name': 'string', 'Inputs': [ 'string', ], 'Source': 'string', 'PrimaryKeys': [ [ 'string', ], ] }, 'Union': { 'Name': 'string', 'Inputs': [ 'string', ], 'UnionType': 'ALL'|'DISTINCT' }, 'PIIDetection': { 'Name': 'string', 'Inputs': [ 'string', ], 'PiiType': 'RowAudit'|'RowMasking'|'ColumnAudit'|'ColumnMasking', 'EntityTypesToDetect': [ 'string', ], 'OutputColumnName': 'string', 'SampleFraction': 123.0, 'ThresholdFraction': 123.0, 'MaskValue': 'string' }, 'Aggregate': { 'Name': 'string', 'Inputs': [ 'string', ], 'Groups': [ [ 'string', ], ], 'Aggs': [ { 'Column': [ 'string', ], 'AggFunc': 'avg'|'countDistinct'|'count'|'first'|'last'|'kurtosis'|'max'|'min'|'skewness'|'stddev_samp'|'stddev_pop'|'sum'|'sumDistinct'|'var_samp'|'var_pop' }, ] }, 'DropDuplicates': { 'Name': 'string', 'Inputs': [ 'string', ], 'Columns': [ [ 'string', ], ] }, 'GovernedCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'GovernedCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'MicrosoftSQLServerCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MicrosoftSQLServerCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'DynamicTransform': { 'Name': 'string', 'TransformName': 'string', 'Inputs': [ 'string', ], 'Parameters': [ { 'Name': 'string', 'Type': 
'str'|'int'|'float'|'complex'|'bool'|'list'|'null', 'ValidationRule': 'string', 'ValidationMessage': 'string', 'Value': [ 'string', ], 'ListType': 'str'|'int'|'float'|'complex'|'bool'|'list'|'null', 'IsOptional': True|False }, ], 'FunctionName': 'string', 'Path': 'string', 'Version': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'EvaluateDataQuality': { 'Name': 'string', 'Inputs': [ 'string', ], 'Ruleset': 'string', 'Output': 'PrimaryInput'|'EvaluationResults', 'PublishingOptions': { 'EvaluationContext': 'string', 'ResultsS3Prefix': 'string', 'CloudWatchMetricsEnabled': True|False, 'ResultsPublishingEnabled': True|False }, 'StopJobOnFailureOptions': { 'StopJobOnFailureTiming': 'Immediate'|'AfterDataLoad' } }, 'S3CatalogHudiSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalHudiOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogHudiSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalHudiOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3HudiSource': { 'Name': 'string', 'Paths': [ 'string', ], 'AdditionalHudiOptions': { 'string': 'string' }, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3HudiCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3HudiDirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Compression': 'gzip'|'lzo'|'uncompressed'|'snappy', 'PartitionKeys': [ [ 'string', ], ], 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'DirectJDBCSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'ConnectionName': 'string', 'ConnectionType': 'sqlserver'|'mysql'|'oracle'|'postgresql'|'redshift', 'RedshiftTmpDir': 'string' }, 'S3CatalogDeltaSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalDeltaOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogDeltaSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalDeltaOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3DeltaSource': { 'Name': 'string', 'Paths': [ 'string', ], 'AdditionalDeltaOptions': { 'string': 'string' }, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3DeltaCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3DeltaDirectTarget': { 'Name': 'string', 
'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'uncompressed'|'snappy', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'AmazonRedshiftSource': { 'Name': 'string', 'Data': { 'AccessType': 'string', 'SourceType': 'string', 'Connection': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Schema': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Table': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogDatabase': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogTable': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogRedshiftSchema': 'string', 'CatalogRedshiftTable': 'string', 'TempDir': 'string', 'IamRole': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'AdvancedOptions': [ { 'Key': 'string', 'Value': 'string' }, ], 'SampleQuery': 'string', 'PreAction': 'string', 'PostAction': 'string', 'Action': 'string', 'TablePrefix': 'string', 'Upsert': True|False, 'MergeAction': 'string', 'MergeWhenMatched': 'string', 'MergeWhenNotMatched': 'string', 'MergeClause': 'string', 'CrawlerConnection': 'string', 'TableSchema': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ], 'StagingTable': 'string', 'SelectedColumns': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ] } }, 'AmazonRedshiftTarget': { 'Name': 'string', 'Data': { 'AccessType': 'string', 'SourceType': 'string', 'Connection': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Schema': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Table': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogDatabase': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogTable': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogRedshiftSchema': 'string', 'CatalogRedshiftTable': 'string', 'TempDir': 'string', 'IamRole': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'AdvancedOptions': [ { 'Key': 'string', 'Value': 'string' }, ], 'SampleQuery': 'string', 'PreAction': 'string', 'PostAction': 'string', 'Action': 'string', 'TablePrefix': 'string', 'Upsert': True|False, 'MergeAction': 'string', 'MergeWhenMatched': 'string', 'MergeWhenNotMatched': 'string', 'MergeClause': 'string', 'CrawlerConnection': 'string', 'TableSchema': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ], 'StagingTable': 'string', 'SelectedColumns': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ] }, 'Inputs': [ 'string', ] }, 'EvaluateDataQualityMultiFrame': { 'Name': 'string', 'Inputs': [ 'string', ], 'AdditionalDataSources': { 'string': 'string' }, 'Ruleset': 'string', 'PublishingOptions': { 'EvaluationContext': 'string', 'ResultsS3Prefix': 'string', 'CloudWatchMetricsEnabled': True|False, 'ResultsPublishingEnabled': True|False }, 'AdditionalOptions': { 'string': 'string' }, 'StopJobOnFailureOptions': { 'StopJobOnFailureTiming': 'Immediate'|'AfterDataLoad' } } } }, 'ExecutionClass': 'FLEX'|'STANDARD', 'SourceControlDetails': { 'Provider': 'GITHUB'|'AWS_CODE_COMMIT', 'Repository': 'string', 'Owner': 'string', 'Branch': 'string', 'Folder': 'string', 'LastCommitId': 'string', 'AuthStrategy': 
'PERSONAL_ACCESS_TOKEN'|'AWS_SECRETS_MANAGER', 'AuthToken': 'string' } }, ], 'NextToken': 'string' }
Response Structure
This section is too large to render. Please see the AWS API Documentation: https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetJobs
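For orientation, the following is a minimal sketch (not taken from this document) of paging through the GetJobs response above with boto3; the region name and MaxResults value are placeholder assumptions.

import boto3

# Hypothetical sketch: page through GetJobs results using NextToken.
# The region name below is an assumption, not a value from this document.
glue = boto3.client("glue", region_name="us-east-1")

jobs = []
next_token = None
while True:
    kwargs = {"MaxResults": 100}
    if next_token:
        kwargs["NextToken"] = next_token
    response = glue.get_jobs(**kwargs)
    jobs.extend(response["Jobs"])
    next_token = response.get("NextToken")
    if not next_token:
        break

# Each entry in jobs follows the Job structure summarized above, including
# WorkerType values such as 'G.1X', 'G.2X', or 'Z.2X' for Ray jobs.
print(f"Retrieved {len(jobs)} job definitions")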
{'WorkerType': {'Z.2X'}}
Gets a Glue machine learning transform artifact and all its corresponding metadata. Machine learning transforms are a special type of transform that use machine learning to learn the details of the transformation to be performed by learning from examples provided by humans. These transformations are then saved by Glue. You can retrieve their metadata by calling GetMLTransform .
See also: AWS API Documentation
Request Syntax
client.get_ml_transform( TransformId='string' )
string
[REQUIRED]
The unique identifier of the transform, generated at the time that the transform was created.
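As a quick illustration, a call might look like the sketch below; the TransformId shown is a hypothetical placeholder, not a value from this document.

import boto3

glue = boto3.client("glue")

# 'tfm-0123456789abcdef' is a hypothetical TransformId used only for illustration.
response = glue.get_ml_transform(TransformId="tfm-0123456789abcdef")
print(response["Name"], response["Status"])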
dict
Response Syntax
{ 'TransformId': 'string', 'Name': 'string', 'Description': 'string', 'Status': 'NOT_READY'|'READY'|'DELETING', 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'InputRecordTables': [ { 'DatabaseName': 'string', 'TableName': 'string', 'CatalogId': 'string', 'ConnectionName': 'string', 'AdditionalOptions': { 'string': 'string' } }, ], 'Parameters': { 'TransformType': 'FIND_MATCHES', 'FindMatchesParameters': { 'PrimaryKeyColumnName': 'string', 'PrecisionRecallTradeoff': 123.0, 'AccuracyCostTradeoff': 123.0, 'EnforceProvidedLabels': True|False } }, 'EvaluationMetrics': { 'TransformType': 'FIND_MATCHES', 'FindMatchesMetrics': { 'AreaUnderPRCurve': 123.0, 'Precision': 123.0, 'Recall': 123.0, 'F1': 123.0, 'ConfusionMatrix': { 'NumTruePositives': 123, 'NumFalsePositives': 123, 'NumTrueNegatives': 123, 'NumFalseNegatives': 123 }, 'ColumnImportances': [ { 'ColumnName': 'string', 'Importance': 123.0 }, ] } }, 'LabelCount': 123, 'Schema': [ { 'Name': 'string', 'DataType': 'string' }, ], 'Role': 'string', 'GlueVersion': 'string', 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'Timeout': 123, 'MaxRetries': 123, 'TransformEncryption': { 'MlUserDataEncryption': { 'MlUserDataEncryptionMode': 'DISABLED'|'SSE-KMS', 'KmsKeyId': 'string' }, 'TaskRunSecurityConfigurationName': 'string' } }
Response Structure
(dict) --
TransformId (string) --
The unique identifier of the transform, generated at the time that the transform was created.
Name (string) --
The unique name given to the transform when it was created.
Description (string) --
A description of the transform.
Status (string) --
The last known status of the transform (to indicate whether it can be used or not). One of "NOT_READY", "READY", or "DELETING".
CreatedOn (datetime) --
The date and time when the transform was created.
LastModifiedOn (datetime) --
The date and time when the transform was last modified.
InputRecordTables (list) --
A list of Glue table definitions used by the transform.
(dict) --
The database and table in the Glue Data Catalog that is used for input or output data.
DatabaseName (string) --
A database name in the Glue Data Catalog.
TableName (string) --
A table name in the Glue Data Catalog.
CatalogId (string) --
A unique identifier for the Glue Data Catalog.
ConnectionName (string) --
The name of the connection to the Glue Data Catalog.
AdditionalOptions (dict) --
Additional options for the table. Two keys are currently supported:
pushDownPredicate : to filter on partitions without having to list and read all the files in your dataset.
catalogPartitionPredicate : to use server-side partition pruning using partition indexes in the Glue Data Catalog.
(string) --
(string) --
Parameters (dict) --
The configuration parameters that are specific to the algorithm used.
TransformType (string) --
The type of machine learning transform.
For information about the types of machine learning transforms, see Creating Machine Learning Transforms.
FindMatchesParameters (dict) --
The parameters for the find matches algorithm.
PrimaryKeyColumnName (string) --
The name of a column that uniquely identifies rows in the source table. Used to help identify matching records.
PrecisionRecallTradeoff (float) --
The value selected when tuning your transform for a balance between precision and recall. A value of 0.5 means no preference; a value of 1.0 means a bias purely for precision, and a value of 0.0 means a bias for recall. Because this is a tradeoff, choosing values close to 1.0 means very low recall, and choosing values close to 0.0 results in very low precision.
The precision metric indicates how often your model is correct when it predicts a match.
The recall metric indicates, for an actual match, how often your model predicts the match.
AccuracyCostTradeoff (float) --
The value that is selected when tuning your transform for a balance between accuracy and cost. A value of 0.5 means that the system balances accuracy and cost concerns. A value of 1.0 means a bias purely for accuracy, which typically results in a higher cost, sometimes substantially higher. A value of 0.0 means a bias purely for cost, which results in a less accurate FindMatches transform, sometimes with unacceptable accuracy.
Accuracy measures how well the transform finds true positives and true negatives. Increasing accuracy requires more machine resources and cost. But it also results in increased recall.
Cost measures how many compute resources, and thus money, are consumed to run the transform.
EnforceProvidedLabels (boolean) --
The value to switch on or off to force the output to match the provided labels from users. If the value is True , the find matches transform forces the output to match the provided labels. The results override the normal conflation results. If the value is False , the find matches transform does not ensure all the labels provided are respected, and the results rely on the trained model.
Note that setting this value to true may increase the conflation execution time.
EvaluationMetrics (dict) --
The latest evaluation metrics.
TransformType (string) --
The type of machine learning transform.
FindMatchesMetrics (dict) --
The evaluation metrics for the find matches algorithm.
AreaUnderPRCurve (float) --
The area under the precision/recall curve (AUPRC) is a single number measuring the overall quality of the transform, that is independent of the choice made for precision vs. recall. Higher values indicate that you have a more attractive precision vs. recall tradeoff.
For more information, see Precision and recall in Wikipedia.
Precision (float) --
The precision metric indicates how often your transform is correct when it predicts a match. Specifically, it measures how well the transform finds true positives from the total true positives possible.
For more information, see Precision and recall in Wikipedia.
Recall (float) --
The recall metric indicates, for an actual match, how often your transform predicts the match. Specifically, it measures how well the transform finds true positives from the total records in the source data.
For more information, see Precision and recall in Wikipedia.
F1 (float) --
The maximum F1 metric indicates the transform's accuracy between 0 and 1, where 1 is the best accuracy.
For more information, see F1 score in Wikipedia.
ConfusionMatrix (dict) --
The confusion matrix shows you what your transform is predicting accurately and what types of errors it is making.
For more information, see Confusion matrix in Wikipedia.
NumTruePositives (integer) --
The number of matches in the data that the transform correctly found, in the confusion matrix for your transform.
NumFalsePositives (integer) --
The number of nonmatches in the data that the transform incorrectly classified as a match, in the confusion matrix for your transform.
NumTrueNegatives (integer) --
The number of nonmatches in the data that the transform correctly rejected, in the confusion matrix for your transform.
NumFalseNegatives (integer) --
The number of matches in the data that the transform didn't find, in the confusion matrix for your transform.
ColumnImportances (list) --
A list of ColumnImportance structures containing column importance metrics, sorted in order of descending importance.
(dict) --
A structure containing the column name and column importance score for a column.
Column importance helps you understand how columns contribute to your model, by identifying which columns in your records are more important than others.
ColumnName (string) --
The name of a column.
Importance (float) --
The column importance score for the column, as a decimal.
LabelCount (integer) --
The number of labels available for this transform.
Schema (list) --
The Map<Column, Type> object that represents the schema that this transform accepts. Has an upper bound of 100 columns.
(dict) --
A key-value pair representing a column and data type that this transform can run against. The Schema parameter of the MLTransform may contain up to 100 of these structures.
Name (string) --
The name of the column.
DataType (string) --
The type of data in the column.
Role (string) --
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions.
GlueVersion (string) --
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
MaxCapacity (float) --
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
When the WorkerType field is set to a value other than Standard , the MaxCapacity field is set automatically and becomes read-only.
WorkerType (string) --
The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when this task runs.
Timeout (integer) --
The timeout for a task run for this transform in minutes. This is the maximum time that a task run for this transform can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
MaxRetries (integer) --
The maximum number of times to retry a task for this transform after a task run fails.
TransformEncryption (dict) --
The encryption-at-rest settings of the transform that apply to accessing user data. Machine learning transforms can access user data encrypted in Amazon S3 using KMS.
MlUserDataEncryption (dict) --
An MLUserDataEncryption object containing the encryption mode and customer-provided KMS key ID.
MlUserDataEncryptionMode (string) --
The encryption mode applied to user data. Valid values are:
DISABLED: encryption is disabled
SSE-KMS: use of server-side encryption with Key Management Service (SSE-KMS) for user data stored in Amazon S3.
KmsKeyId (string) --
The ID for the customer-provided KMS key.
TaskRunSecurityConfigurationName (string) --
The name of the security configuration.
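Putting the fields above together, the sketch below shows one way the response might be inspected; it assumes response is the dict returned by the get_ml_transform call sketched earlier and that the transform uses the FIND_MATCHES type with evaluation metrics already computed.

# Sketch: reading selected fields from a GetMLTransform response.
# Assumes `response` was returned by glue.get_ml_transform(...) as shown earlier;
# EvaluationMetrics may be absent for transforms that have not yet been trained.
status = response["Status"]               # 'NOT_READY' | 'READY' | 'DELETING'
worker_type = response.get("WorkerType")  # e.g. 'G.1X' or 'G.2X'; absent when MaxCapacity is used

metrics = response.get("EvaluationMetrics", {}).get("FindMatchesMetrics", {})
if metrics:
    print("AUPRC:", metrics["AreaUnderPRCurve"])
    print("Precision:", metrics["Precision"], "Recall:", metrics["Recall"], "F1:", metrics["F1"])
    cm = metrics["ConfusionMatrix"]
    print("TP/FP/TN/FN:", cm["NumTruePositives"], cm["NumFalsePositives"],
          cm["NumTrueNegatives"], cm["NumFalseNegatives"])

print("Transform is", status, "WorkerType =", worker_type)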
{'Transforms': {'WorkerType': {'Z.2X'}}}
Gets a sortable, filterable list of existing Glue machine learning transforms. Machine learning transforms are a special type of transform that use machine learning to learn the details of the transformation to be performed by learning from examples provided by humans. These transformations are then saved by Glue, and you can retrieve their metadata by calling GetMLTransforms .
See also: AWS API Documentation
Request Syntax
client.get_ml_transforms( NextToken='string', MaxResults=123, Filter={ 'Name': 'string', 'TransformType': 'FIND_MATCHES', 'Status': 'NOT_READY'|'READY'|'DELETING', 'GlueVersion': 'string', 'CreatedBefore': datetime(2015, 1, 1), 'CreatedAfter': datetime(2015, 1, 1), 'LastModifiedBefore': datetime(2015, 1, 1), 'LastModifiedAfter': datetime(2015, 1, 1), 'Schema': [ { 'Name': 'string', 'DataType': 'string' }, ] }, Sort={ 'Column': 'NAME'|'TRANSFORM_TYPE'|'STATUS'|'CREATED'|'LAST_MODIFIED', 'SortDirection': 'DESCENDING'|'ASCENDING' } )
string
A paginated token to offset the results.
integer
The maximum number of results to return.
dict
The filter transformation criteria.
Name (string) --
A unique transform name that is used to filter the machine learning transforms.
TransformType (string) --
The type of machine learning transform that is used to filter the machine learning transforms.
Status (string) --
Filters the list of machine learning transforms by the last known status of the transforms (to indicate whether a transform can be used or not). One of "NOT_READY", "READY", or "DELETING".
GlueVersion (string) --
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
CreatedBefore (datetime) --
The time and date before which the transforms were created.
CreatedAfter (datetime) --
The time and date after which the transforms were created.
LastModifiedBefore (datetime) --
Filter on transforms last modified before this date.
LastModifiedAfter (datetime) --
Filter on transforms last modified after this date.
Schema (list) --
Filters on datasets with a specific schema. The Map<Column, Type> object is an array of key-value pairs representing the schema this transform accepts, where Column is the name of a column, and Type is the type of the data such as an integer or string. Has an upper bound of 100 columns.
(dict) --
A key-value pair representing a column and data type that this transform can run against. The Schema parameter of the MLTransform may contain up to 100 of these structures.
Name (string) --
The name of the column.
DataType (string) --
The type of data in the column.
dict
The sorting criteria.
Column (string) -- [REQUIRED]
The column to be used in the sorting criteria that are associated with the machine learning transform.
SortDirection (string) -- [REQUIRED]
The sort direction to be used in the sorting criteria that are associated with the machine learning transform.
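As a rough sketch of how the Filter and Sort parameters above might be combined, consider the following; the date and result limit are placeholder assumptions.

from datetime import datetime
import boto3

glue = boto3.client("glue")

# Sketch: list READY FIND_MATCHES transforms created after a placeholder date,
# newest-modified first. Values shown are assumptions for illustration only.
response = glue.get_ml_transforms(
    MaxResults=50,
    Filter={
        "TransformType": "FIND_MATCHES",
        "Status": "READY",
        "CreatedAfter": datetime(2023, 1, 1),
    },
    Sort={"Column": "LAST_MODIFIED", "SortDirection": "DESCENDING"},
)
for transform in response["Transforms"]:
    print(transform["TransformId"], transform["Name"], transform["Status"])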
dict
Response Syntax
{ 'Transforms': [ { 'TransformId': 'string', 'Name': 'string', 'Description': 'string', 'Status': 'NOT_READY'|'READY'|'DELETING', 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'InputRecordTables': [ { 'DatabaseName': 'string', 'TableName': 'string', 'CatalogId': 'string', 'ConnectionName': 'string', 'AdditionalOptions': { 'string': 'string' } }, ], 'Parameters': { 'TransformType': 'FIND_MATCHES', 'FindMatchesParameters': { 'PrimaryKeyColumnName': 'string', 'PrecisionRecallTradeoff': 123.0, 'AccuracyCostTradeoff': 123.0, 'EnforceProvidedLabels': True|False } }, 'EvaluationMetrics': { 'TransformType': 'FIND_MATCHES', 'FindMatchesMetrics': { 'AreaUnderPRCurve': 123.0, 'Precision': 123.0, 'Recall': 123.0, 'F1': 123.0, 'ConfusionMatrix': { 'NumTruePositives': 123, 'NumFalsePositives': 123, 'NumTrueNegatives': 123, 'NumFalseNegatives': 123 }, 'ColumnImportances': [ { 'ColumnName': 'string', 'Importance': 123.0 }, ] } }, 'LabelCount': 123, 'Schema': [ { 'Name': 'string', 'DataType': 'string' }, ], 'Role': 'string', 'GlueVersion': 'string', 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'Timeout': 123, 'MaxRetries': 123, 'TransformEncryption': { 'MlUserDataEncryption': { 'MlUserDataEncryptionMode': 'DISABLED'|'SSE-KMS', 'KmsKeyId': 'string' }, 'TaskRunSecurityConfigurationName': 'string' } }, ], 'NextToken': 'string' }
Response Structure
(dict) --
Transforms (list) --
A list of machine learning transforms.
(dict) --
A structure for a machine learning transform.
TransformId (string) --
The unique transform ID that is generated for the machine learning transform. The ID is guaranteed to be unique and does not change.
Name (string) --
A user-defined name for the machine learning transform. Names are not guaranteed unique and can be changed at any time.
Description (string) --
A user-defined, long-form description text for the machine learning transform. Descriptions are not guaranteed to be unique and can be changed at any time.
Status (string) --
The current status of the machine learning transform.
CreatedOn (datetime) --
A timestamp. The time and date that this machine learning transform was created.
LastModifiedOn (datetime) --
A timestamp. The last point in time when this machine learning transform was modified.
InputRecordTables (list) --
A list of Glue table definitions used by the transform.
(dict) --
The database and table in the Glue Data Catalog that is used for input or output data.
DatabaseName (string) --
A database name in the Glue Data Catalog.
TableName (string) --
A table name in the Glue Data Catalog.
CatalogId (string) --
A unique identifier for the Glue Data Catalog.
ConnectionName (string) --
The name of the connection to the Glue Data Catalog.
AdditionalOptions (dict) --
Additional options for the table. Two keys are currently supported:
pushDownPredicate : to filter on partitions without having to list and read all the files in your dataset.
catalogPartitionPredicate : to use server-side partition pruning using partition indexes in the Glue Data Catalog.
(string) --
(string) --
Parameters (dict) --
A TransformParameters object. You can use parameters to tune (customize) the behavior of the machine learning transform by specifying what data it learns from and your preference on various tradeoffs (such as precision vs. recall, or accuracy vs. cost).
TransformType (string) --
The type of machine learning transform.
For information about the types of machine learning transforms, see Creating Machine Learning Transforms.
FindMatchesParameters (dict) --
The parameters for the find matches algorithm.
PrimaryKeyColumnName (string) --
The name of a column that uniquely identifies rows in the source table. Used to help identify matching records.
PrecisionRecallTradeoff (float) --
The value selected when tuning your transform for a balance between precision and recall. A value of 0.5 means no preference; a value of 1.0 means a bias purely for precision, and a value of 0.0 means a bias for recall. Because this is a tradeoff, choosing values close to 1.0 means very low recall, and choosing values close to 0.0 results in very low precision.
The precision metric indicates how often your model is correct when it predicts a match.
The recall metric indicates, for an actual match, how often your model predicts the match.
AccuracyCostTradeoff (float) --
The value that is selected when tuning your transform for a balance between accuracy and cost. A value of 0.5 means that the system balances accuracy and cost concerns. A value of 1.0 means a bias purely for accuracy, which typically results in a higher cost, sometimes substantially higher. A value of 0.0 means a bias purely for cost, which results in a less accurate FindMatches transform, sometimes with unacceptable accuracy.
Accuracy measures how well the transform finds true positives and true negatives. Increasing accuracy requires more machine resources and cost. But it also results in increased recall.
Cost measures how many compute resources, and thus money, are consumed to run the transform.
EnforceProvidedLabels (boolean) --
The value to switch on or off to force the output to match the provided labels from users. If the value is True , the find matches transform forces the output to match the provided labels. The results override the normal conflation results. If the value is False , the find matches transform does not ensure all the labels provided are respected, and the results rely on the trained model.
Note that setting this value to true may increase the conflation execution time.
EvaluationMetrics (dict) --
An EvaluationMetrics object. Evaluation metrics provide an estimate of the quality of your machine learning transform.
TransformType (string) --
The type of machine learning transform.
FindMatchesMetrics (dict) --
The evaluation metrics for the find matches algorithm.
AreaUnderPRCurve (float) --
The area under the precision/recall curve (AUPRC) is a single number measuring the overall quality of the transform, that is independent of the choice made for precision vs. recall. Higher values indicate that you have a more attractive precision vs. recall tradeoff.
For more information, see Precision and recall in Wikipedia.
Precision (float) --
The precision metric indicates how often your transform is correct when it predicts a match. Specifically, it measures how well the transform finds true positives from the total true positives possible.
For more information, see Precision and recall in Wikipedia.
Recall (float) --
The recall metric indicates, for an actual match, how often your transform predicts the match. Specifically, it measures how well the transform finds true positives from the total records in the source data.
For more information, see Precision and recall in Wikipedia.
F1 (float) --
The maximum F1 metric indicates the transform's accuracy between 0 and 1, where 1 is the best accuracy.
For more information, see F1 score in Wikipedia.
ConfusionMatrix (dict) --
The confusion matrix shows you what your transform is predicting accurately and what types of errors it is making.
For more information, see Confusion matrix in Wikipedia.
NumTruePositives (integer) --
The number of matches in the data that the transform correctly found, in the confusion matrix for your transform.
NumFalsePositives (integer) --
The number of nonmatches in the data that the transform incorrectly classified as a match, in the confusion matrix for your transform.
NumTrueNegatives (integer) --
The number of nonmatches in the data that the transform correctly rejected, in the confusion matrix for your transform.
NumFalseNegatives (integer) --
The number of matches in the data that the transform didn't find, in the confusion matrix for your transform.
ColumnImportances (list) --
A list of ColumnImportance structures containing column importance metrics, sorted in order of descending importance.
(dict) --
A structure containing the column name and column importance score for a column.
Column importance helps you understand how columns contribute to your model, by identifying which columns in your records are more important than others.
ColumnName (string) --
The name of a column.
Importance (float) --
The column importance score for the column, as a decimal.
LabelCount (integer) --
A count identifier for the labeling files generated by Glue for this transform. As you create a better transform, you can iteratively download, label, and upload the labeling file.
Schema (list) --
A map of key-value pairs representing the columns and data types that this transform can run against. Has an upper bound of 100 columns.
(dict) --
A key-value pair representing a column and data type that this transform can run against. The Schema parameter of the MLTransform may contain up to 100 of these structures.
Name (string) --
The name of the column.
DataType (string) --
The type of data in the column.
Role (string) --
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions. The required permissions include both Glue service role permissions to Glue resources, and Amazon S3 permissions required by the transform.
This role needs Glue service role permissions to allow access to resources in Glue. See Attach a Policy to IAM Users That Access Glue.
This role needs permission to your Amazon Simple Storage Service (Amazon S3) sources, targets, temporary directory, scripts, and any libraries used by the task run for this transform.
GlueVersion (string) --
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
MaxCapacity (float) --
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType .
If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.
If MaxCapacity is set, then neither NumberOfWorkers nor WorkerType can be set.
If WorkerType is set, then NumberOfWorkers is required (and vice versa).
MaxCapacity and NumberOfWorkers must both be at least 1.
When the WorkerType field is set to a value other than Standard , the MaxCapacity field is set automatically and becomes read-only.
WorkerType (string) --
The type of predefined worker that is allocated when a task of this transform runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType .
If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.
If MaxCapacity is set, then neither NumberOfWorkers nor WorkerType can be set.
If WorkerType is set, then NumberOfWorkers is required (and vice versa).
MaxCapacity and NumberOfWorkers must both be at least 1.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a task of the transform runs.
If WorkerType is set, then NumberOfWorkers is required (and vice versa).
Timeout (integer) --
The timeout in minutes of the machine learning transform.
MaxRetries (integer) --
The maximum number of times to retry after an MLTaskRun of the machine learning transform fails.
TransformEncryption (dict) --
The encryption-at-rest settings of the transform that apply to accessing user data. Machine learning transforms can access user data encrypted in Amazon S3 using KMS.
MlUserDataEncryption (dict) --
An MLUserDataEncryption object containing the encryption mode and customer-provided KMS key ID.
MlUserDataEncryptionMode (string) --
The encryption mode applied to user data. Valid values are:
DISABLED: encryption is disabled
SSE-KMS: use of server-side encryption with Key Management Service (SSE-KMS) for user data stored in Amazon S3.
KmsKeyId (string) --
The ID for the customer-provided KMS key.
TaskRunSecurityConfigurationName (string) --
The name of the security configuration.
NextToken (string) --
A pagination token, if more results are available.
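When more transforms exist than fit in one response, the returned NextToken can be fed back into the next call. A minimal sketch follows, reusing the glue client from the example above, with Filter and Sort omitted for brevity.

# Sketch: drain all pages of GetMLTransforms using NextToken.
transforms = []
next_token = None
while True:
    kwargs = {"MaxResults": 100}
    if next_token:
        kwargs["NextToken"] = next_token
    page = glue.get_ml_transforms(**kwargs)
    transforms.extend(page["Transforms"])
    next_token = page.get("NextToken")
    if not next_token:
        break
print(len(transforms), "transforms found")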
{'Workflow': {'Graph': {'Nodes': {'JobDetails': {'JobRuns': {'WorkerType': {'Z.2X'}}}}}, 'LastRun': {'Graph': {'Nodes': {'JobDetails': {'JobRuns': {'WorkerType': {'Z.2X'}}}}}}}}
Retrieves resource metadata for a workflow.
See also: AWS API Documentation
Request Syntax
client.get_workflow( Name='string', IncludeGraph=True|False )
string
[REQUIRED]
The name of the workflow to retrieve.
boolean
Specifies whether to include a graph when returning the workflow resource metadata.
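A minimal sketch of retrieving a workflow together with its graph follows; the workflow name is a hypothetical placeholder, not a value from this document.

import boto3

glue = boto3.client("glue")

# 'nightly-etl' is a hypothetical workflow name used only for illustration.
response = glue.get_workflow(Name="nightly-etl", IncludeGraph=True)
workflow = response["Workflow"]

# When IncludeGraph=True, the Graph lists nodes (triggers, jobs, crawlers)
# and the directed edges between them, as described in the response structure below.
for node in workflow.get("Graph", {}).get("Nodes", []):
    print(node["Type"], node["Name"])
for edge in workflow.get("Graph", {}).get("Edges", []):
    print(edge["SourceId"], "->", edge["DestinationId"])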
dict
Response Syntax
{ 'Workflow': { 'Name': 'string', 'Description': 'string', 'DefaultRunProperties': { 'string': 'string' }, 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'LastRun': { 'Name': 'string', 'WorkflowRunId': 'string', 'PreviousRunId': 'string', 'WorkflowRunProperties': { 'string': 'string' }, 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'Status': 'RUNNING'|'COMPLETED'|'STOPPING'|'STOPPED'|'ERROR', 'ErrorMessage': 'string', 'Statistics': { 'TotalActions': 123, 'TimeoutActions': 123, 'FailedActions': 123, 'StoppedActions': 123, 'SucceededActions': 123, 'RunningActions': 123, 'ErroredActions': 123, 'WaitingActions': 123 }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'StartingEventBatchCondition': { 'BatchSize': 123, 'BatchWindow': 123 } }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' 
}, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'MaxConcurrentRuns': 123, 'BlueprintDetails': { 'BlueprintName': 'string', 'RunId': 'string' } } }
Response Structure
(dict) --
Workflow (dict) --
The resource metadata for the workflow.
Name (string) --
The name of the workflow.
Description (string) --
A description of the workflow.
DefaultRunProperties (dict) --
A collection of properties to be used as part of each execution of the workflow. The run properties are made available to each job in the workflow. A job can modify the properties for the next jobs in the flow.
(string) --
(string) --
CreatedOn (datetime) --
The date and time when the workflow was created.
LastModifiedOn (datetime) --
The date and time when the workflow was last modified.
LastRun (dict) --
The information about the last execution of the workflow.
Name (string) --
Name of the workflow that was run.
WorkflowRunId (string) --
The ID of this workflow run.
PreviousRunId (string) --
The ID of the previous workflow run.
WorkflowRunProperties (dict) --
The workflow run properties which were set during the run.
(string) --
(string) --
StartedOn (datetime) --
The date and time when the workflow run was started.
CompletedOn (datetime) --
The date and time when the workflow run completed.
Status (string) --
The status of the workflow run.
ErrorMessage (string) --
This error message describes any error that may have occurred in starting the workflow run. Currently the only error message is "Concurrent runs exceeded for workflow: foo ."
Statistics (dict) --
The statistics of the run.
TotalActions (integer) --
Total number of Actions in the workflow run.
TimeoutActions (integer) --
Total number of Actions that timed out.
FailedActions (integer) --
Total number of Actions that have failed.
StoppedActions (integer) --
Total number of Actions that have stopped.
SucceededActions (integer) --
Total number of Actions that have succeeded.
RunningActions (integer) --
Total number of Actions in the running state.
ErroredActions (integer) --
Indicates the count of job runs in the ERROR state in the workflow run.
WaitingActions (integer) --
Indicates the count of job runs in WAITING state in the workflow run.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity . Instead, you should specify a Worker type and the Number of workers .
Do not set MaxCapacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPU, 64 GB of memory, 128 GB disk), and provides up to 8 Ray workers (one per vCPU) based on the autoscaler.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray, Python and additional libraries available in your Ray job are determined by the Runtime parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution-class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique Id of the node within the workflow where the edge starts.
DestinationId (string) --
The unique Id of the node within the workflow where the edge ends.
StartingEventBatchCondition (dict) --
The batch condition that started the workflow run.
BatchSize (integer) --
Number of events in the batch.
BatchWindow (integer) --
Duration of the batch window in seconds.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity . Instead, you should specify a Worker type and the Number of workers .
Do not set MaxCapacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPU, 64 GB of memory, 128 GB disk), and provides up to 8 Ray workers (one per vCPU) based on the autoscaler.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray, Python and additional libraries available in your Ray job are determined by the Runtime parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique identifier of the node within the workflow where the edge starts.
DestinationId (string) --
The unique identifier of the node within the workflow where the edge ends.
MaxConcurrentRuns (integer) --
You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.
BlueprintDetails (dict) --
This structure indicates the details of the blueprint that this particular workflow is created from.
BlueprintName (string) --
The name of the blueprint.
RunId (string) --
The run ID for this blueprint.
{'Run': {'Graph': {'Nodes': {'JobDetails': {'JobRuns': {'WorkerType': {'Z.2X'}}}}}}}
Retrieves the metadata for a given workflow run.
See also: AWS API Documentation
Request Syntax
client.get_workflow_run( Name='string', RunId='string', IncludeGraph=True|False )
string
[REQUIRED]
Name of the workflow being run.
string
[REQUIRED]
The ID of the workflow run.
boolean
Specifies whether to include the workflow graph in the response or not.
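As a minimal boto3 sketch of this request (the workflow name and run ID below are placeholder values, not values from this entry):
import boto3

glue = boto3.client('glue')

# Fetch one workflow run, including its graph of nodes and edges.
response = glue.get_workflow_run(
    Name='my-workflow',              # placeholder workflow name
    RunId='wr_0123456789abcdef',     # placeholder run ID
    IncludeGraph=True,
)
run = response['Run']
print(run['Status'], run['Statistics']['SucceededActions'])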
dict
Response Syntax
{ 'Run': { 'Name': 'string', 'WorkflowRunId': 'string', 'PreviousRunId': 'string', 'WorkflowRunProperties': { 'string': 'string' }, 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'Status': 'RUNNING'|'COMPLETED'|'STOPPING'|'STOPPED'|'ERROR', 'ErrorMessage': 'string', 'Statistics': { 'TotalActions': 123, 'TimeoutActions': 123, 'FailedActions': 123, 'StoppedActions': 123, 'SucceededActions': 123, 'RunningActions': 123, 'ErroredActions': 123, 'WaitingActions': 123 }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'StartingEventBatchCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }
Response Structure
(dict) --
Run (dict) --
The requested workflow run metadata.
Name (string) --
Name of the workflow that was run.
WorkflowRunId (string) --
The ID of this workflow run.
PreviousRunId (string) --
The ID of the previous workflow run.
WorkflowRunProperties (dict) --
The workflow run properties which were set during the run.
(string) --
(string) --
StartedOn (datetime) --
The date and time when the workflow run was started.
CompletedOn (datetime) --
The date and time when the workflow run completed.
Status (string) --
The status of the workflow run.
ErrorMessage (string) --
This error message describes any error that may have occurred in starting the workflow run. Currently the only error message is "Concurrent runs exceeded for workflow: foo ."
Statistics (dict) --
The statistics of the run.
TotalActions (integer) --
Total number of Actions in the workflow run.
TimeoutActions (integer) --
Total number of Actions that timed out.
FailedActions (integer) --
Total number of Actions that have failed.
StoppedActions (integer) --
Total number of Actions that have stopped.
SucceededActions (integer) --
Total number of Actions that have succeeded.
RunningActions (integer) --
Total number of Actions in running state.
ErroredActions (integer) --
Indicates the count of job runs in the ERROR state in the workflow run.
WaitingActions (integer) --
Indicates the count of job runs in WAITING state in the workflow run.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity . Instead, you should specify a Worker type and the Number of workers .
Do not set MaxCapacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPU, 64 GB of memory, 128 GB disk), and provides up to 8 Ray workers (one per vCPU) based on the autoscaler.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray, Python and additional libraries available in your Ray job are determined by the Runtime parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique identifier of the node within the workflow where the edge starts.
DestinationId (string) --
The unique identifier of the node within the workflow where the edge ends.
StartingEventBatchCondition (dict) --
The batch condition that started the workflow run.
BatchSize (integer) --
Number of events in the batch.
BatchWindow (integer) --
Duration of the batch window in seconds.
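The nested Graph in the response can then be walked to summarize the job nodes. A minimal, self-contained sketch using the same placeholder names as the request example above:
import boto3

glue = boto3.client('glue')

run = glue.get_workflow_run(
    Name='my-workflow', RunId='wr_0123456789abcdef', IncludeGraph=True
)['Run']

# Print each job node's runs with their state and worker type.
for node in run.get('Graph', {}).get('Nodes', []):
    if node['Type'] != 'JOB':
        continue
    for job_run in node.get('JobDetails', {}).get('JobRuns', []):
        print(node['Name'], job_run['JobRunState'], job_run.get('WorkerType'))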
{'Runs': {'Graph': {'Nodes': {'JobDetails': {'JobRuns': {'WorkerType': {'Z.2X'}}}}}}}
Retrieves metadata for all runs of a given workflow.
See also: AWS API Documentation
Request Syntax
client.get_workflow_runs( Name='string', IncludeGraph=True|False, NextToken='string', MaxResults=123 )
string
[REQUIRED]
Name of the workflow whose metadata of runs should be returned.
boolean
Specifies whether to include the workflow graph in the response or not.
string
A continuation token, if this is a continuation call.
integer
The maximum number of workflow runs to be included in the response.
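A minimal sketch of paging through all runs with NextToken (the workflow name is a placeholder and MaxResults is an arbitrary choice):
import boto3

glue = boto3.client('glue')

runs = []
kwargs = {'Name': 'my-workflow', 'IncludeGraph': False, 'MaxResults': 25}
while True:
    page = glue.get_workflow_runs(**kwargs)
    runs.extend(page['Runs'])
    if 'NextToken' not in page:
        break
    kwargs['NextToken'] = page['NextToken']

print(len(runs), 'workflow runs retrieved')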
dict
Response Syntax
{ 'Runs': [ { 'Name': 'string', 'WorkflowRunId': 'string', 'PreviousRunId': 'string', 'WorkflowRunProperties': { 'string': 'string' }, 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'Status': 'RUNNING'|'COMPLETED'|'STOPPING'|'STOPPED'|'ERROR', 'ErrorMessage': 'string', 'Statistics': { 'TotalActions': 123, 'TimeoutActions': 123, 'FailedActions': 123, 'StoppedActions': 123, 'SucceededActions': 123, 'RunningActions': 123, 'ErroredActions': 123, 'WaitingActions': 123 }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'StartingEventBatchCondition': { 'BatchSize': 123, 'BatchWindow': 123 } }, ], 'NextToken': 'string' }
Response Structure
(dict) --
Runs (list) --
A list of workflow run metadata objects.
(dict) --
A workflow run is an execution of a workflow providing all the runtime information.
Name (string) --
Name of the workflow that was run.
WorkflowRunId (string) --
The ID of this workflow run.
PreviousRunId (string) --
The ID of the previous workflow run.
WorkflowRunProperties (dict) --
The workflow run properties which were set during the run.
(string) --
(string) --
StartedOn (datetime) --
The date and time when the workflow run was started.
CompletedOn (datetime) --
The date and time when the workflow run completed.
Status (string) --
The status of the workflow run.
ErrorMessage (string) --
This error message describes any error that may have occurred in starting the workflow run. Currently the only error message is "Concurrent runs exceeded for workflow: foo ."
Statistics (dict) --
The statistics of the run.
TotalActions (integer) --
Total number of Actions in the workflow run.
TimeoutActions (integer) --
Total number of Actions that timed out.
FailedActions (integer) --
Total number of Actions that have failed.
StoppedActions (integer) --
Total number of Actions that have stopped.
SucceededActions (integer) --
Total number of Actions that have succeeded.
RunningActions (integer) --
Total number of Actions in running state.
ErroredActions (integer) --
Indicates the count of job runs in the ERROR state in the workflow run.
WaitingActions (integer) --
Indicates the count of job runs in WAITING state in the workflow run.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity . Instead, you should specify a Worker type and the Number of workers .
Do not set MaxCapacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPU, 64 GB of memory, 128 GB disk), and provides up to 8 Ray workers (one per vCPU) based on the autoscaler.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray, Python and additional libraries available in your Ray job are determined by the Runtime parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique identifier of the node within the workflow where the edge starts.
DestinationId (string) --
The unique identifier of the node within the workflow where the edge ends.
StartingEventBatchCondition (dict) --
The batch condition that started the workflow run.
BatchSize (integer) --
Number of events in the batch.
BatchWindow (integer) --
Duration of the batch window in seconds.
NextToken (string) --
A continuation token, if not all requested workflow runs have been returned.
{'WorkerType': {'Z.2X'}}
Starts a job run using a job definition.
See also: AWS API Documentation
Request Syntax
client.start_job_run( JobName='string', JobRunId='string', Arguments={ 'string': 'string' }, AllocatedCapacity=123, Timeout=123, MaxCapacity=123.0, SecurityConfiguration='string', NotificationProperty={ 'NotifyDelayAfter': 123 }, WorkerType='Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', NumberOfWorkers=123, ExecutionClass='FLEX'|'STANDARD' )
string
[REQUIRED]
The name of the job definition to use.
string
The ID of a previous JobRun to retry.
dict
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
(string) --
(string) --
integer
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this JobRun. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
integer
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
float
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity . Instead, you should specify a Worker type and the Number of workers .
Do not set MaxCapacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
string
The name of the SecurityConfiguration structure to be used with this job run.
dict
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
string
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPU, 64 GB of memory, 128 GB disk), and provides up to 8 Ray workers (one per vCPU) based on the autoscaler.
integer
The number of workers of a defined workerType that are allocated when a job runs.
string
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
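A minimal sketch of starting a run with the Z.2X worker type added in this release; the job name, worker count, and argument are placeholders, and the job is assumed to already be defined as a Glue 4.0 Ray job:
import boto3

glue = boto3.client('glue')

# Use WorkerType/NumberOfWorkers rather than MaxCapacity, per the notes above.
response = glue.start_job_run(
    JobName='my-ray-job',                                  # placeholder job name
    WorkerType='Z.2X',
    NumberOfWorkers=5,
    Arguments={'--input_path': 's3://my-bucket/input/'},   # placeholder argument
)
print(response['JobRunId'])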
dict
Response Syntax
{ 'JobRunId': 'string' }
Response Structure
(dict) --
JobRunId (string) --
The ID assigned to this job run.
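The returned JobRunId can then be polled for completion. A minimal sketch, continuing the example above and using the separate GetJobRun operation (which is not part of this entry):
import time

# 'glue', 'response', and the job name are the same placeholders as in the previous sketch.
terminal_states = {'SUCCEEDED', 'FAILED', 'STOPPED', 'TIMEOUT', 'ERROR'}
run_id = response['JobRunId']
while True:
    state = glue.get_job_run(JobName='my-ray-job', RunId=run_id)['JobRun']['JobRunState']
    if state in terminal_states:
        break
    time.sleep(30)
print('final state:', state)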
{'JobUpdate': {'Command': {'Runtime': 'string'}, 'WorkerType': {'Z.2X'}}}
Updates an existing job definition. The previous job definition is completely overwritten by this information.
See also: AWS API Documentation
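A minimal sketch of the new Runtime field in the job Command; all names and values below are placeholders, and 'glueray'/'Ray2.4' are assumptions for the Ray command name and runtime identifier rather than values confirmed by this entry. Since the JobUpdate completely replaces the previous definition, include every field you want to keep:
import boto3

glue = boto3.client('glue')

glue.update_job(
    JobName='my-ray-job',
    JobUpdate={
        'Role': 'arn:aws:iam::123456789012:role/GlueJobRole',   # placeholder role
        'GlueVersion': '4.0',                                   # Ray jobs require 4.0 or greater
        'WorkerType': 'Z.2X',
        'NumberOfWorkers': 5,
        'Command': {
            'Name': 'glueray',          # assumed command name for Ray jobs
            'PythonVersion': '3.9',     # assumed Python version
            'Runtime': 'Ray2.4',        # assumed Ray runtime identifier
            'ScriptLocation': 's3://my-bucket/scripts/job.py',
        },
    },
)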
Request Syntax
client.update_job( JobName='string', JobUpdate={ 'Description': 'string', 'LogUri': 'string', 'Role': 'string', 'ExecutionProperty': { 'MaxConcurrentRuns': 123 }, 'Command': { 'Name': 'string', 'ScriptLocation': 'string', 'PythonVersion': 'string', 'Runtime': 'string' }, 'DefaultArguments': { 'string': 'string' }, 'NonOverridableArguments': { 'string': 'string' }, 'Connections': { 'Connections': [ 'string', ] }, 'MaxRetries': 123, 'AllocatedCapacity': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'CodeGenConfigurationNodes': { 'string': { 'AthenaConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'ConnectionTable': 'string', 'SchemaName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'JDBCConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'FilterPredicate': 'string', 'PartitionColumn': 'string', 'LowerBound': 123, 'UpperBound': 123, 'NumPartitions': 123, 'JobBookmarkKeys': [ 'string', ], 'JobBookmarkKeysSortOrder': 'string', 'DataTypeMapping': { 'string': 'DATE'|'STRING'|'TIMESTAMP'|'INT'|'FLOAT'|'LONG'|'BIGDECIMAL'|'BYTE'|'SHORT'|'DOUBLE' } }, 'ConnectionTable': 'string', 'Query': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'RedshiftSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string' }, 'S3CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'S3CsvSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'Separator': 'comma'|'ctrla'|'pipe'|'semicolon'|'tab', 'Escaper': 'string', 'QuoteChar': 'quote'|'quillemet'|'single_quote'|'disabled', 'Multiline': True|False, 'WithHeader': True|False, 'WriteHeader': True|False, 'SkipFirst': True|False, 'OptimizePerformance': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3JsonSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'JsonPath': 'string', 'Multiline': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3ParquetSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 
'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'RelationalCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'DynamoDBCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'JDBCConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectionTable': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'RedshiftTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string', 'UpsertRedshiftOptions': { 'TableLocation': 'string', 'ConnectionName': 'string', 'UpsertKeys': [ 'string', ] } }, 'S3CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3GlueParquetTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'S3DirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'string', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'ApplyMapping': { 'Name': 'string', 'Inputs': [ 'string', ], 'Mapping': [ { 'ToKey': 'string', 'FromPath': [ 'string', ], 'FromType': 'string', 'ToType': 'string', 'Dropped': True|False, 'Children': {'... 
recursive ...'} }, ] }, 'SelectFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'DropFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'RenameField': { 'Name': 'string', 'Inputs': [ 'string', ], 'SourcePath': [ 'string', ], 'TargetPath': [ 'string', ] }, 'Spigot': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Topk': 123, 'Prob': 123.0 }, 'Join': { 'Name': 'string', 'Inputs': [ 'string', ], 'JoinType': 'equijoin'|'left'|'right'|'outer'|'leftsemi'|'leftanti', 'Columns': [ { 'From': 'string', 'Keys': [ [ 'string', ], ] }, ] }, 'SplitFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'SelectFromCollection': { 'Name': 'string', 'Inputs': [ 'string', ], 'Index': 123 }, 'FillMissingValues': { 'Name': 'string', 'Inputs': [ 'string', ], 'ImputedPath': 'string', 'FilledPath': 'string' }, 'Filter': { 'Name': 'string', 'Inputs': [ 'string', ], 'LogicalOperator': 'AND'|'OR', 'Filters': [ { 'Operation': 'EQ'|'LT'|'GT'|'LTE'|'GTE'|'REGEX'|'ISNULL', 'Negated': True|False, 'Values': [ { 'Type': 'COLUMNEXTRACTED'|'CONSTANT', 'Value': [ 'string', ] }, ] }, ] }, 'CustomCode': { 'Name': 'string', 'Inputs': [ 'string', ], 'Code': 'string', 'ClassName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkSQL': { 'Name': 'string', 'Inputs': [ 'string', ], 'SqlQuery': 'string', 'SqlAliases': [ { 'From': 'string', 'Alias': 'string' }, ], 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'DirectKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string', 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DirectKafkaSource': { 'Name': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123, 'IncludeHeaders': True|False, 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'WindowSize': 123, 'DetectSchema': True|False, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 
'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string', 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKafkaSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123, 'IncludeHeaders': True|False, 'AddRecordTimestamp': 'string', 'EmitConsumerLagMetrics': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DropNullFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'NullCheckBoxList': { 'IsEmpty': True|False, 'IsNullString': True|False, 'IsNegOne': True|False }, 'NullTextList': [ { 'Value': 'string', 'Datatype': { 'Id': 'string', 'Label': 'string' } }, ] }, 'Merge': { 'Name': 'string', 'Inputs': [ 'string', ], 'Source': 'string', 'PrimaryKeys': [ [ 'string', ], ] }, 'Union': { 'Name': 'string', 'Inputs': [ 'string', ], 'UnionType': 'ALL'|'DISTINCT' }, 'PIIDetection': { 'Name': 'string', 'Inputs': [ 'string', ], 'PiiType': 'RowAudit'|'RowMasking'|'ColumnAudit'|'ColumnMasking', 'EntityTypesToDetect': [ 'string', ], 'OutputColumnName': 'string', 'SampleFraction': 123.0, 'ThresholdFraction': 123.0, 'MaskValue': 'string' }, 'Aggregate': { 'Name': 'string', 'Inputs': [ 'string', ], 'Groups': [ [ 'string', ], ], 'Aggs': [ { 'Column': [ 'string', ], 'AggFunc': 'avg'|'countDistinct'|'count'|'first'|'last'|'kurtosis'|'max'|'min'|'skewness'|'stddev_samp'|'stddev_pop'|'sum'|'sumDistinct'|'var_samp'|'var_pop' }, ] }, 'DropDuplicates': { 'Name': 'string', 'Inputs': [ 'string', ], 'Columns': [ [ 'string', ], ] }, 'GovernedCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'GovernedCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'MicrosoftSQLServerCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MicrosoftSQLServerCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'DynamicTransform': { 'Name': 'string', 'TransformName': 'string', 'Inputs': [ 'string', ], 'Parameters': [ { 'Name': 'string', 'Type': 
'str'|'int'|'float'|'complex'|'bool'|'list'|'null', 'ValidationRule': 'string', 'ValidationMessage': 'string', 'Value': [ 'string', ], 'ListType': 'str'|'int'|'float'|'complex'|'bool'|'list'|'null', 'IsOptional': True|False }, ], 'FunctionName': 'string', 'Path': 'string', 'Version': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'EvaluateDataQuality': { 'Name': 'string', 'Inputs': [ 'string', ], 'Ruleset': 'string', 'Output': 'PrimaryInput'|'EvaluationResults', 'PublishingOptions': { 'EvaluationContext': 'string', 'ResultsS3Prefix': 'string', 'CloudWatchMetricsEnabled': True|False, 'ResultsPublishingEnabled': True|False }, 'StopJobOnFailureOptions': { 'StopJobOnFailureTiming': 'Immediate'|'AfterDataLoad' } }, 'S3CatalogHudiSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalHudiOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogHudiSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalHudiOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3HudiSource': { 'Name': 'string', 'Paths': [ 'string', ], 'AdditionalHudiOptions': { 'string': 'string' }, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3HudiCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3HudiDirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Compression': 'gzip'|'lzo'|'uncompressed'|'snappy', 'PartitionKeys': [ [ 'string', ], ], 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'DirectJDBCSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'ConnectionName': 'string', 'ConnectionType': 'sqlserver'|'mysql'|'oracle'|'postgresql'|'redshift', 'RedshiftTmpDir': 'string' }, 'S3CatalogDeltaSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalDeltaOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogDeltaSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'AdditionalDeltaOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3DeltaSource': { 'Name': 'string', 'Paths': [ 'string', ], 'AdditionalDeltaOptions': { 'string': 'string' }, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3DeltaCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3DeltaDirectTarget': { 'Name': 'string', 
'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'uncompressed'|'snappy', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet'|'hudi'|'delta', 'AdditionalOptions': { 'string': 'string' }, 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'AmazonRedshiftSource': { 'Name': 'string', 'Data': { 'AccessType': 'string', 'SourceType': 'string', 'Connection': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Schema': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Table': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogDatabase': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogTable': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogRedshiftSchema': 'string', 'CatalogRedshiftTable': 'string', 'TempDir': 'string', 'IamRole': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'AdvancedOptions': [ { 'Key': 'string', 'Value': 'string' }, ], 'SampleQuery': 'string', 'PreAction': 'string', 'PostAction': 'string', 'Action': 'string', 'TablePrefix': 'string', 'Upsert': True|False, 'MergeAction': 'string', 'MergeWhenMatched': 'string', 'MergeWhenNotMatched': 'string', 'MergeClause': 'string', 'CrawlerConnection': 'string', 'TableSchema': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ], 'StagingTable': 'string', 'SelectedColumns': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ] } }, 'AmazonRedshiftTarget': { 'Name': 'string', 'Data': { 'AccessType': 'string', 'SourceType': 'string', 'Connection': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Schema': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'Table': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogDatabase': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogTable': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'CatalogRedshiftSchema': 'string', 'CatalogRedshiftTable': 'string', 'TempDir': 'string', 'IamRole': { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, 'AdvancedOptions': [ { 'Key': 'string', 'Value': 'string' }, ], 'SampleQuery': 'string', 'PreAction': 'string', 'PostAction': 'string', 'Action': 'string', 'TablePrefix': 'string', 'Upsert': True|False, 'MergeAction': 'string', 'MergeWhenMatched': 'string', 'MergeWhenNotMatched': 'string', 'MergeClause': 'string', 'CrawlerConnection': 'string', 'TableSchema': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ], 'StagingTable': 'string', 'SelectedColumns': [ { 'Value': 'string', 'Label': 'string', 'Description': 'string' }, ] }, 'Inputs': [ 'string', ] }, 'EvaluateDataQualityMultiFrame': { 'Name': 'string', 'Inputs': [ 'string', ], 'AdditionalDataSources': { 'string': 'string' }, 'Ruleset': 'string', 'PublishingOptions': { 'EvaluationContext': 'string', 'ResultsS3Prefix': 'string', 'CloudWatchMetricsEnabled': True|False, 'ResultsPublishingEnabled': True|False }, 'AdditionalOptions': { 'string': 'string' }, 'StopJobOnFailureOptions': { 'StopJobOnFailureTiming': 'Immediate'|'AfterDataLoad' } } } }, 'ExecutionClass': 'FLEX'|'STANDARD', 'SourceControlDetails': { 'Provider': 'GITHUB'|'AWS_CODE_COMMIT', 'Repository': 'string', 'Owner': 'string', 'Branch': 'string', 'Folder': 'string', 'LastCommitId': 'string', 'AuthStrategy': 
'PERSONAL_ACCESS_TOKEN'|'AWS_SECRETS_MANAGER', 'AuthToken': 'string' } } )
Parameters
This section is too large to render. Please see the AWS API Documentation: https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/UpdateJob
dict
Response Syntax
{ 'JobName': 'string' }
Response Structure
(dict) --
JobName (string) --
Returns the name of the updated job definition.
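As a usage note, the following is a minimal sketch of calling update_job to move a job onto the Ray runtime via the new Runtime field in Command. The job name, role ARN, and script location are hypothetical placeholders; the command name glueray, runtime Ray2.4, Glue version 4.0, and Z.2X worker type are the typical Ray job settings, but confirm the values supported in your Region before relying on them.
import boto3

glue = boto3.client('glue')

# Hypothetical job name, role, and script path -- substitute your own values.
response = glue.update_job(
    JobName='example-ray-job',
    JobUpdate={
        'Role': 'arn:aws:iam::111122223333:role/ExampleGlueJobRole',
        'Command': {
            'Name': 'glueray',                    # Ray jobs use the glueray command
            'PythonVersion': '3.9',
            'Runtime': 'Ray2.4',                  # the Runtime field added in this release
            'ScriptLocation': 's3://example-bucket/scripts/ray_job.py',
        },
        'GlueVersion': '4.0',
        'WorkerType': 'Z.2X',                     # Ray jobs run on Z.2X workers
        'NumberOfWorkers': 5,
    },
)
print(response['JobName'])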
{'WorkerType': {'Z.2X'}}
Updates an existing machine learning transform. Call this operation to tune the algorithm parameters to achieve better results.
After calling this operation, you can call the StartMLEvaluationTaskRun operation to assess how well your new parameters achieved your goals (such as improving the quality of your machine learning transform, or making it more cost-effective).
See also: AWS API Documentation
Request Syntax
client.update_ml_transform( TransformId='string', Name='string', Description='string', Parameters={ 'TransformType': 'FIND_MATCHES', 'FindMatchesParameters': { 'PrimaryKeyColumnName': 'string', 'PrecisionRecallTradeoff': 123.0, 'AccuracyCostTradeoff': 123.0, 'EnforceProvidedLabels': True|False } }, Role='string', GlueVersion='string', MaxCapacity=123.0, WorkerType='Standard'|'G.1X'|'G.2X'|'G.025X'|'G.4X'|'G.8X'|'Z.2X', NumberOfWorkers=123, Timeout=123, MaxRetries=123 )
string
[REQUIRED]
A unique identifier that was generated when the transform was created.
string
The unique name that you gave the transform when you created it.
string
A description of the transform. The default is an empty string.
dict
The configuration parameters that are specific to the transform type (algorithm) used. Conditionally dependent on the transform type.
TransformType (string) -- [REQUIRED]
The type of machine learning transform.
For information about the types of machine learning transforms, see Creating Machine Learning Transforms.
FindMatchesParameters (dict) --
The parameters for the find matches algorithm.
PrimaryKeyColumnName (string) --
The name of a column that uniquely identifies rows in the source table. Used to help identify matching records.
PrecisionRecallTradeoff (float) --
The value selected when tuning your transform for a balance between precision and recall. A value of 0.5 means no preference; a value of 1.0 means a bias purely for precision, and a value of 0.0 means a bias for recall. Because this is a tradeoff, choosing values close to 1.0 means very low recall, and choosing values close to 0.0 results in very low precision.
The precision metric indicates how often your model is correct when it predicts a match.
The recall metric indicates how often your model predicts the match when an actual match exists.
AccuracyCostTradeoff (float) --
The value that is selected when tuning your transform for a balance between accuracy and cost. A value of 0.5 means that the system balances accuracy and cost concerns. A value of 1.0 means a bias purely for accuracy, which typically results in a higher cost, sometimes substantially higher. A value of 0.0 means a bias purely for cost, which results in a less accurate FindMatches transform, sometimes with unacceptable accuracy.
Accuracy measures how well the transform finds true positives and true negatives. Increasing accuracy requires more machine resources and cost, but it also results in increased recall.
Cost measures how many compute resources, and thus money, are consumed to run the transform.
EnforceProvidedLabels (boolean) --
Whether to force the output to match the labels provided by users. If the value is True, the find matches transform forces the output to match the provided labels, overriding the normal conflation results. If the value is False, the transform does not ensure that all the provided labels are respected, and the results rely on the trained model.
Note that setting this value to true may increase the conflation execution time.
string
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions.
string
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
float
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
When the WorkerType field is set to a value other than Standard , the MaxCapacity field is set automatically and becomes read-only.
string
The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
integer
The number of workers of a defined workerType that are allocated when this task runs.
integer
The timeout for a task run for this transform in minutes. This is the maximum time that a task run for this transform can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
integer
The maximum number of times to retry a task for this transform after a task run fails.
dict
Response Syntax
{ 'TransformId': 'string' }
Response Structure
(dict) --
TransformId (string) --
The unique identifier for the transform that was updated.
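To tie the parameters above together, here is a minimal sketch that tunes an existing FindMatches transform toward precision. The transform ID, key column, and numeric tradeoff values are hypothetical placeholders rather than recommendations.
import boto3

glue = boto3.client('glue')

# Hypothetical transform ID and tuning values -- adjust for your transform.
response = glue.update_ml_transform(
    TransformId='tfm-0123456789abcdef0123456789abcdef',
    Parameters={
        'TransformType': 'FIND_MATCHES',
        'FindMatchesParameters': {
            'PrimaryKeyColumnName': 'record_id',   # column that uniquely identifies source rows
            'PrecisionRecallTradeoff': 0.9,        # closer to 1.0 biases toward precision
            'AccuracyCostTradeoff': 0.7,           # closer to 1.0 biases toward accuracy (and cost)
            'EnforceProvidedLabels': True,
        },
    },
    GlueVersion='1.0',
    WorkerType='G.1X',    # with a non-Standard WorkerType, MaxCapacity is managed automatically
    NumberOfWorkers=5,
    Timeout=120,
)
print(response['TransformId'])
After the update, StartMLEvaluationTaskRun can be called to check whether the new tradeoff values actually improved match quality, as described above.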