2022/08/08 - AWS Glue - 17 updated api methods
Changes: Add an option to run non-urgent or non-time-sensitive Glue jobs on spare capacity.
{'Jobs': {'ExecutionClass': 'FLEX | STANDARD'}}
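For illustration, assuming an SDK version that includes this change and an existing job (the job name here is hypothetical), a run could be started on spare capacity like this:
import boto3

glue = boto3.client('glue')

# 'FLEX' runs the job on spare capacity; 'STANDARD' keeps the default behavior.
response = glue.start_job_run(
    JobName='my-nightly-etl-job',   # hypothetical existing job
    ExecutionClass='FLEX',
)
print(response['JobRunId'])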
Returns a list of resource metadata for a given list of job names. After calling the ListJobs operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.
See also: AWS API Documentation
Request Syntax
response = client.batch_get_jobs(
    JobNames=[
        'string',
    ]
)
list
[REQUIRED]
A list of job names, which might be the names returned from the ListJobs operation.
(string) --
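A minimal usage sketch (the job names are placeholders):
import boto3

glue = boto3.client('glue')

# The job names below are hypothetical; they would typically come from list_jobs().
response = glue.batch_get_jobs(JobNames=['daily-ingest', 'weekly-rollup'])

for job in response['Jobs']:
    print(job['Name'], job.get('ExecutionClass', 'STANDARD'))

# Names that did not resolve to a job are returned separately.
print('Not found:', response['JobsNotFound'])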
dict
Response Syntax
{ 'Jobs': [ { 'Name': 'string', 'Description': 'string', 'LogUri': 'string', 'Role': 'string', 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'ExecutionProperty': { 'MaxConcurrentRuns': 123 }, 'Command': { 'Name': 'string', 'ScriptLocation': 'string', 'PythonVersion': 'string' }, 'DefaultArguments': { 'string': 'string' }, 'NonOverridableArguments': { 'string': 'string' }, 'Connections': { 'Connections': [ 'string', ] }, 'MaxRetries': 123, 'AllocatedCapacity': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'CodeGenConfigurationNodes': { 'string': { 'AthenaConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'ConnectionTable': 'string', 'SchemaName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'JDBCConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'FilterPredicate': 'string', 'PartitionColumn': 'string', 'LowerBound': 123, 'UpperBound': 123, 'NumPartitions': 123, 'JobBookmarkKeys': [ 'string', ], 'JobBookmarkKeysSortOrder': 'string', 'DataTypeMapping': { 'string': 'DATE'|'STRING'|'TIMESTAMP'|'INT'|'FLOAT'|'LONG'|'BIGDECIMAL'|'BYTE'|'SHORT'|'DOUBLE' } }, 'ConnectionTable': 'string', 'Query': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'RedshiftSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string' }, 'S3CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'S3CsvSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'Separator': 'comma'|'ctrla'|'pipe'|'semicolon'|'tab', 'Escaper': 'string', 'QuoteChar': 'quote'|'quillemet'|'single_quote'|'disabled', 'Multiline': True|False, 'WithHeader': True|False, 'WriteHeader': True|False, 'SkipFirst': True|False, 'OptimizePerformance': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3JsonSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'JsonPath': 'string', 'Multiline': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3ParquetSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 
'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'RelationalCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'DynamoDBCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'JDBCConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectionTable': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'RedshiftTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string', 'UpsertRedshiftOptions': { 'TableLocation': 'string', 'ConnectionName': 'string', 'UpsertKeys': [ 'string', ] } }, 'S3CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3GlueParquetTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'S3DirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'string', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'ApplyMapping': { 'Name': 'string', 'Inputs': [ 'string', ], 'Mapping': [ { 'ToKey': 'string', 'FromPath': [ 'string', ], 'FromType': 'string', 'ToType': 'string', 'Dropped': True|False, 'Children': {'... 
recursive ...'} }, ] }, 'SelectFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'DropFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'RenameField': { 'Name': 'string', 'Inputs': [ 'string', ], 'SourcePath': [ 'string', ], 'TargetPath': [ 'string', ] }, 'Spigot': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Topk': 123, 'Prob': 123.0 }, 'Join': { 'Name': 'string', 'Inputs': [ 'string', ], 'JoinType': 'equijoin'|'left'|'right'|'outer'|'leftsemi'|'leftanti', 'Columns': [ { 'From': 'string', 'Keys': [ [ 'string', ], ] }, ] }, 'SplitFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'SelectFromCollection': { 'Name': 'string', 'Inputs': [ 'string', ], 'Index': 123 }, 'FillMissingValues': { 'Name': 'string', 'Inputs': [ 'string', ], 'ImputedPath': 'string', 'FilledPath': 'string' }, 'Filter': { 'Name': 'string', 'Inputs': [ 'string', ], 'LogicalOperator': 'AND'|'OR', 'Filters': [ { 'Operation': 'EQ'|'LT'|'GT'|'LTE'|'GTE'|'REGEX'|'ISNULL', 'Negated': True|False, 'Values': [ { 'Type': 'COLUMNEXTRACTED'|'CONSTANT', 'Value': [ 'string', ] }, ] }, ] }, 'CustomCode': { 'Name': 'string', 'Inputs': [ 'string', ], 'Code': 'string', 'ClassName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkSQL': { 'Name': 'string', 'Inputs': [ 'string', ], 'SqlQuery': 'string', 'SqlAliases': [ { 'From': 'string', 'Alias': 'string' }, ], 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'DirectKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DirectKafkaSource': { 'Name': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123 }, 'WindowSize': 123, 'DetectSchema': True|False, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string' }, 
'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKafkaSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123 }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DropNullFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'NullCheckBoxList': { 'IsEmpty': True|False, 'IsNullString': True|False, 'IsNegOne': True|False }, 'NullTextList': [ { 'Value': 'string', 'Datatype': { 'Id': 'string', 'Label': 'string' } }, ] }, 'Merge': { 'Name': 'string', 'Inputs': [ 'string', ], 'Source': 'string', 'PrimaryKeys': [ [ 'string', ], ] }, 'Union': { 'Name': 'string', 'Inputs': [ 'string', ], 'UnionType': 'ALL'|'DISTINCT' }, 'PIIDetection': { 'Name': 'string', 'Inputs': [ 'string', ], 'PiiType': 'RowAudit'|'RowMasking'|'ColumnAudit'|'ColumnMasking', 'EntityTypesToDetect': [ 'string', ], 'OutputColumnName': 'string', 'SampleFraction': 123.0, 'ThresholdFraction': 123.0, 'MaskValue': 'string' }, 'Aggregate': { 'Name': 'string', 'Inputs': [ 'string', ], 'Groups': [ [ 'string', ], ], 'Aggs': [ { 'Column': [ 'string', ], 'AggFunc': 'avg'|'countDistinct'|'count'|'first'|'last'|'kurtosis'|'max'|'min'|'skewness'|'stddev_samp'|'stddev_pop'|'sum'|'sumDistinct'|'var_samp'|'var_pop' }, ] }, 'DropDuplicates': { 'Name': 'string', 'Inputs': [ 'string', ], 'Columns': [ [ 'string', ], ] }, 'GovernedCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'GovernedCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'MicrosoftSQLServerCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MicrosoftSQLServerCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' } } }, 'ExecutionClass': 'FLEX'|'STANDARD' }, ], 'JobsNotFound': [ 'string', ] }
Response Structure
(dict) --
Jobs (list) --
A list of job definitions.
(dict) --
Specifies a job definition.
Name (string) --
The name you assign to this job definition.
Description (string) --
A description of the job.
LogUri (string) --
This field is reserved for future use.
Role (string) --
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
CreatedOn (datetime) --
The time and date that this job definition was created.
LastModifiedOn (datetime) --
The last point in time when this job definition was modified.
ExecutionProperty (dict) --
An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.
MaxConcurrentRuns (integer) --
The maximum number of concurrent runs allowed for the job. The default is 1. An error is returned when this threshold is reached. The maximum value you can specify is controlled by a service limit.
Command (dict) --
The JobCommand that runs this job.
Name (string) --
The name of the job command. For an Apache Spark ETL job, this must be glueetl . For a Python shell job, it must be pythonshell . For an Apache Spark streaming ETL job, this must be gluestreaming .
ScriptLocation (string) --
Specifies the Amazon Simple Storage Service (Amazon S3) path to a script that runs a job.
PythonVersion (string) --
The Python version being used to run a Python shell job. Allowed values are 2 or 3.
DefaultArguments (dict) --
The default arguments for this job, specified as name-value pairs.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
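As a hedged illustration, a DefaultArguments dictionary might mix arguments consumed by Glue itself with arguments consumed by the job script (the bucket and the --source_table argument are assumptions):
default_arguments = {
    # Special parameters consumed by Glue itself.
    '--TempDir': 's3://my-glue-temp-bucket/tmp/',
    '--job-bookmark-option': 'job-bookmark-enable',
    # A user-defined argument read by the job script (for example via getResolvedOptions).
    '--source_table': 'raw_events',
}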
NonOverridableArguments (dict) --
Non-overridable arguments for this job, specified as name-value pairs.
(string) --
(string) --
Connections (dict) --
The connections used for this job.
Connections (list) --
A list of connections used by the job.
(string) --
MaxRetries (integer) --
The maximum number of times to retry this job after a JobRun fails.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to runs of this job. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Timeout (integer) --
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
For Glue version 2.0 jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.
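A sketch of a create_job call for a Glue 3.0 Spark job sized with WorkerType and NumberOfWorkers instead of MaxCapacity (the job name, role ARN, and script location are placeholders):
import boto3

glue = boto3.client('glue')

glue.create_job(
    Name='example-spark-etl',                        # hypothetical job name
    Role='arn:aws:iam::123456789012:role/GlueRole',  # placeholder role ARN
    Command={
        'Name': 'glueetl',
        'ScriptLocation': 's3://my-bucket/scripts/etl.py',
        'PythonVersion': '3',
    },
    GlueVersion='3.0',
    WorkerType='G.1X',
    NumberOfWorkers=10,   # do not also set MaxCapacity when using workers
)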
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job.
NotificationProperty (dict) --
Specifies configuration properties of a job notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
CodeGenConfigurationNodes (dict) --
The representation of a directed acyclic graph on which both the Glue Studio visual component and Glue Studio code generation are based.
(string) --
(dict) --
CodeGenConfigurationNode enumerates all valid Node types. One and only one of its member variables can be populated.
AthenaConnectorSource (dict) --
Specifies a connector to an Amazon Athena data source.
Name (string) --
The name of the data source.
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectorName (string) --
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) --
The type of connection, such as marketplace.athena or custom.athena, designating a connection to an Amazon Athena data store.
ConnectionTable (string) --
The name of the table in the data source.
SchemaName (string) --
The name of the Cloudwatch log group to read from. For example, /aws-glue/jobs/output .
OutputSchemas (list) --
Specifies the data schema for the custom Athena source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
JDBCConnectorSource (dict) --
Specifies a connector to a JDBC data source.
Name (string) --
The name of the data source.
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectorName (string) --
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) --
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
FilterPredicate (string) --
Extra condition clause to filter data from source. For example:
BillingCity='Mountain View'
When using a query instead of a table name, you should validate that the query works with the specified filterPredicate .
PartitionColumn (string) --
The name of an integer column that is used for partitioning. This option works only when it's included with lowerBound , upperBound , and numPartitions . This option works the same way as in the Spark SQL JDBC reader.
LowerBound (integer) --
The minimum value of partitionColumn that is used to decide partition stride.
UpperBound (integer) --
The maximum value of partitionColumn that is used to decide partition stride.
NumPartitions (integer) --
The number of partitions. This value, along with lowerBound (inclusive) and upperBound (exclusive), form partition strides for generated WHERE clause expressions that are used to split the partitionColumn .
JobBookmarkKeys (list) --
The name of the job bookmark keys on which to sort.
(string) --
JobBookmarkKeysSortOrder (string) --
Specifies an ascending or descending sort order.
DataTypeMapping (dict) --
Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option "dataTypeMapping":{"FLOAT":"STRING"} maps data fields of JDBC type FLOAT into the Java String type by calling the ResultSet.getString() method of the driver, and uses it to build the Glue record. The ResultSet object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.
(string) --
(string) --
ConnectionTable (string) --
The name of the table in the data source.
Query (string) --
The table or SQL query to get the data from. You can specify either ConnectionTable or query , but not both.
OutputSchemas (list) --
Specifies the data schema for the custom JDBC source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
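To make the partitioned-read options above concrete, a hypothetical JDBCConnectorSource node inside CodeGenConfigurationNodes might look like this (the connection, connector, and table names are assumptions):
jdbc_source_node = {
    'JDBCConnectorSource': {
        'Name': 'orders_source',               # hypothetical node name
        'ConnectionName': 'my-jdbc-connection',
        'ConnectorName': 'my-jdbc-connector',
        'ConnectionType': 'custom.jdbc',
        'ConnectionTable': 'orders',
        'AdditionalOptions': {
            'FilterPredicate': "order_status = 'SHIPPED'",
            # PartitionColumn with LowerBound/UpperBound/NumPartitions splits the read
            # into parallel strides, as in the Spark SQL JDBC reader.
            'PartitionColumn': 'order_id',
            'LowerBound': 0,
            'UpperBound': 1000000,
            'NumPartitions': 10,
        },
    }
}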
SparkConnectorSource (dict) --
Specifies a connector to an Apache Spark data source.
Name (string) --
The name of the data source.
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectorName (string) --
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) --
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies data schema for the custom spark source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
CatalogSource (dict) --
Specifies a data store in the Glue Data Catalog.
Name (string) --
The name of the data store.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
RedshiftSource (dict) --
Specifies an Amazon Redshift data store.
Name (string) --
The name of the Amazon Redshift data store.
Database (string) --
The database to read from.
Table (string) --
The database table to read from.
RedshiftTmpDir (string) --
The Amazon S3 path where temporary data can be staged when copying out of the database.
TmpDirIAMRole (string) --
The IAM role with permissions.
S3CatalogSource (dict) --
Specifies an Amazon S3 data store in the Glue Data Catalog.
Name (string) --
The name of the data store.
Database (string) --
The database to read from.
Table (string) --
The database table to read from.
PartitionPredicate (string) --
Partitions satisfying this predicate are deleted. Files within the retention period in these partitions are not deleted. Set to "" – empty by default.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
S3CsvSource (dict) --
Specifies a comma-separated values (CSV) data store stored in Amazon S3.
Name (string) --
The name of the data store.
Paths (list) --
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
Separator (string) --
Specifies the delimiter character. The default is a comma: ",", but any other character can be specified.
Escaper (string) --
Specifies a character to use for escaping. This option is used only when reading CSV files. The default value is none . If enabled, the character which immediately follows is used as-is, except for a small set of well-known escapes ( \n , \r , \t , and \0 ).
QuoteChar (string) --
Specifies the character to use for quoting. The default is a double quote: '"' . Set this to -1 to turn off quoting entirely.
Multiline (boolean) --
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The default value is False , which allows for more aggressive file-splitting during parsing.
WithHeader (boolean) --
A Boolean value that specifies whether to treat the first line as a header. The default value is False .
WriteHeader (boolean) --
A Boolean value that specifies whether to write the header to output. The default value is True .
SkipFirst (boolean) --
A Boolean value that specifies whether to skip the first data line. The default value is False .
OptimizePerformance (boolean) --
A Boolean value that specifies whether to use the advanced SIMD CSV reader along with Apache Arrow based columnar memory formats. Only available in Glue version 3.0.
OutputSchemas (list) --
Specifies the data schema for the S3 CSV source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
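As an illustration of the CSV options above, a hypothetical S3CsvSource node could be configured like this (the bucket path is a placeholder):
s3_csv_source_node = {
    'S3CsvSource': {
        'Name': 'events_csv',                        # hypothetical node name
        'Paths': ['s3://my-bucket/input/events/'],   # placeholder S3 path
        'CompressionType': 'gzip',
        'Separator': 'comma',
        'QuoteChar': 'quote',
        'WithHeader': True,    # treat the first line as a header
        'Recurse': True,       # read files in all subdirectories under the path
    }
}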
S3JsonSource (dict) --
Specifies a JSON data store stored in Amazon S3.
Name (string) --
The name of the data store.
Paths (list) --
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
JsonPath (string) --
A JsonPath string defining the JSON data.
Multiline (boolean) --
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The default value is False , which allows for more aggressive file-splitting during parsing.
OutputSchemas (list) --
Specifies the data schema for the S3 JSON source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
S3ParquetSource (dict) --
Specifies an Apache Parquet data store stored in Amazon S3.
Name (string) --
The name of the data store.
Paths (list) --
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "snappy", "lzo", "gzip", "uncompressed", and "none".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
OutputSchemas (list) --
Specifies the data schema for the S3 Parquet source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
RelationalCatalogSource (dict) --
Specifies a Relational database data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
DynamoDBCatalogSource (dict) --
Specifies a DynamoDB data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
JDBCConnectorTarget (dict) --
Specifies a data target that writes to a JDBC data store using a JDBC connector.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectionTable (string) --
The name of the table in the data target.
ConnectorName (string) --
The name of a connector that will be used.
ConnectionType (string) --
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data target.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies the data schema for the JDBC target.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
SparkConnectorTarget (dict) --
Specifies a target that uses an Apache Spark connector.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
ConnectionName (string) --
The name of a connection for an Apache Spark connector.
ConnectorName (string) --
The name of an Apache Spark connector.
ConnectionType (string) --
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies the data schema for the custom spark target.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
CatalogTarget (dict) --
Specifies a target that uses a Glue Data Catalog table.
Name (string) --
The name of your data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The database that contains the table you want to use as the target. This database must already exist in the Data Catalog.
Table (string) --
The table that defines the schema of your output data. This table must already exist in the Data Catalog.
RedshiftTarget (dict) --
Specifies a target that uses Amazon Redshift.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
RedshiftTmpDir (string) --
The Amazon S3 path where temporary data can be staged when copying out of the database.
TmpDirIAMRole (string) --
The IAM role with permissions.
UpsertRedshiftOptions (dict) --
The set of options to configure an upsert operation when writing to a Redshift target.
TableLocation (string) --
The physical location of the Redshift table.
ConnectionName (string) --
The name of the connection to use to write to Redshift.
UpsertKeys (list) --
The keys used to determine whether to perform an update or insert.
(string) --
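A hypothetical RedshiftTarget node using the upsert options described above (the database, table, connection, and key names are assumptions):
redshift_target_node = {
    'RedshiftTarget': {
        'Name': 'orders_target',            # hypothetical node name
        'Inputs': ['apply_mapping_node'],   # upstream node name, assumed
        'Database': 'analytics',
        'Table': 'orders',
        'RedshiftTmpDir': 's3://my-bucket/redshift-tmp/',   # placeholder staging path
        'UpsertRedshiftOptions': {
            'TableLocation': 'public.orders',
            'ConnectionName': 'my-redshift-connection',
            'UpsertKeys': ['order_id'],     # matching rows are updated, others inserted
        },
    }
}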
S3CatalogTarget (dict) --
Specifies a data target that writes to Amazon S3 using the Glue Data Catalog.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Table (string) --
The name of the table in the database to write to.
Database (string) --
The name of the database to write to.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
S3GlueParquetTarget (dict) --
Specifies a data target that writes to Amazon S3 in Apache Parquet columnar storage.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Path (string) --
A single Amazon S3 path to write to.
Compression (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "snappy", "lzo", "gzip", "uncompressed", and "none".
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
Table (string) --
Specifies the table in the database that the schema change policy applies to.
Database (string) --
Specifies the database that the schema change policy applies to.
S3DirectTarget (dict) --
Specifies a data target that writes to Amazon S3.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Path (string) --
A single Amazon S3 path to write to.
Compression (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Format (string) --
Specifies the data output format for the target.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
Table (string) --
Specifies the table in the database that the schema change policy applies to.
Database (string) --
Specifies the database that the schema change policy applies to.
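For example, a hypothetical S3DirectTarget node that writes partitioned Parquet and updates the Data Catalog when the schema changes (all names and paths are placeholders):
s3_direct_target_node = {
    'S3DirectTarget': {
        'Name': 'curated_output',            # hypothetical node name
        'Inputs': ['apply_mapping_node'],    # upstream node, assumed
        'Path': 's3://my-bucket/curated/events/',
        'Format': 'parquet',
        'Compression': 'snappy',
        'PartitionKeys': [['year'], ['month']],
        'SchemaChangePolicy': {
            'EnableUpdateCatalog': True,
            'UpdateBehavior': 'UPDATE_IN_DATABASE',
            'Database': 'curated_db',
            'Table': 'events',
        },
    }
}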
ApplyMapping (dict) --
Specifies a transform that maps data property keys in the data source to data property keys in the data target. You can rename keys, modify the data types for keys, and choose which keys to drop from the dataset.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Mapping (list) --
Specifies the mapping of data property keys in the data source to data property keys in the data target.
(dict) --
Specifies the mapping of data property keys.
ToKey (string) --
After the apply mapping, what the name of the column should be. Can be the same as FromPath .
FromPath (list) --
The table or column to be modified.
(string) --
FromType (string) --
The type of the data to be modified.
ToType (string) --
The data type that the data is to be modified to.
Dropped (boolean) --
If true, then the column is removed.
Children (list) --
Only applicable to nested data structures. If you want to change the parent structure, but also one of its children, you can fill out this data structure. It is also Mapping , but its FromPath will be the parent's FromPath plus the FromPath from this structure.
For the children part, suppose you have the structure:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Children": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false }] }
You can specify a Mapping that looks like:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Children": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false }] }
SelectFields (dict) --
Specifies a transform that chooses the data property keys that you want to keep.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Paths (list) --
A JSON path to a variable in the data structure.
(list) --
(string) --
DropFields (dict) --
Specifies a transform that chooses the data property keys that you want to drop.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Paths (list) --
A JSON path to a variable in the data structure.
(list) --
(string) --
RenameField (dict) --
Specifies a transform that renames a single data property key.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
SourcePath (list) --
A JSON path to a variable in the data structure for the source data.
(string) --
TargetPath (list) --
A JSON path to a variable in the data structure for the target data.
(string) --
Spigot (dict) --
Specifies a transform that writes samples of the data to an Amazon S3 bucket.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Path (string) --
A path in Amazon S3 where the transform will write a subset of records from the dataset to a JSON file in an Amazon S3 bucket.
Topk (integer) --
Specifies a number of records to write starting from the beginning of the dataset.
Prob (float) --
The probability (a decimal value with a maximum value of 1) of picking any given record. A value of 1 indicates that each row read from the dataset should be included in the sample output.
Join (dict) --
Specifies a transform that joins two datasets into one dataset using a comparison phrase on the specified data property keys. You can use inner, outer, left, right, left semi, and left anti joins.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
JoinType (string) --
Specifies the type of join to be performed on the datasets.
Columns (list) --
A list of the two columns to be joined.
(dict) --
Specifies a column to be joined.
From (string) --
The column to be joined.
Keys (list) --
The key of the column to be joined.
(list) --
(string) --
SplitFields (dict) --
Specifies a transform that splits data property keys into two DynamicFrames . The output is a collection of DynamicFrames : one with selected data property keys, and one with the remaining data property keys.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Paths (list) --
A JSON path to a variable in the data structure.
(list) --
(string) --
SelectFromCollection (dict) --
Specifies a transform that chooses one DynamicFrame from a collection of DynamicFrames . The output is the selected DynamicFrame .
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Index (integer) --
The index for the DynamicFrame to be selected.
FillMissingValues (dict) --
Specifies a transform that locates records in the dataset that have missing values and adds a new field with a value determined by imputation. The input data set is used to train the machine learning model that determines what the missing value should be.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
ImputedPath (string) --
A JSON path to a variable in the data structure for the dataset that is imputed.
FilledPath (string) --
A JSON path to a variable in the data structure for the dataset that is filled.
Filter (dict) --
Specifies a transform that splits a dataset into two, based on a filter condition.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
LogicalOperator (string) --
The operator used to filter rows by comparing the key value to a specified value.
Filters (list) --
Specifies a filter expression.
(dict) --
Specifies a filter expression.
Operation (string) --
The type of operation to perform in the expression.
Negated (boolean) --
Whether the expression is to be negated.
Values (list) --
A list of filter values.
(dict) --
Represents a single entry in the list of values for a FilterExpression .
Type (string) --
The type of filter value.
Value (list) --
The value to be associated.
(string) --
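A hypothetical Filter node combining two expressions with AND (the column names are assumptions):
filter_node = {
    'Filter': {
        'Name': 'active_us_rows',      # hypothetical node name
        'Inputs': ['source_node'],     # upstream node, assumed
        'LogicalOperator': 'AND',
        'Filters': [
            {
                # country == 'US'
                'Operation': 'EQ',
                'Negated': False,
                'Values': [
                    {'Type': 'COLUMNEXTRACTED', 'Value': ['country']},
                    {'Type': 'CONSTANT', 'Value': ['US']},
                ],
            },
            {
                # deleted_at is null (row has not been soft-deleted)
                'Operation': 'ISNULL',
                'Negated': False,
                'Values': [
                    {'Type': 'COLUMNEXTRACTED', 'Value': ['deleted_at']},
                ],
            },
        ],
    }
}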
CustomCode (dict) --
Specifies a transform that uses custom code you provide to perform the data transformation. The output is a collection of DynamicFrames.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Code (string) --
The custom code that is used to perform the data transformation.
ClassName (string) --
The name defined for the custom code node class.
OutputSchemas (list) --
Specifies the data schema for the custom code transform.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
SparkSQL (dict) --
Specifies a transform where you enter a SQL query using Spark SQL syntax to transform the data. The output is a single DynamicFrame .
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
(string) --
SqlQuery (string) --
A SQL query that must use Spark SQL syntax and return a single data set.
SqlAliases (list) --
A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, suppose you have a data source named "MyDataSource". If you specify From as MyDataSource and Alias as SqlName, then in your SQL you can write:
select * from SqlName
and that gets data from MyDataSource.
(dict) --
Represents a single entry in the list of values for SqlAliases .
From (string) --
A table, or a column in a table.
Alias (string) --
A temporary name given to a table, or a column in a table.
OutputSchemas (list) --
Specifies the data schema for the SparkSQL transform.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
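Tying this together, a hypothetical SparkSQL node that aliases its input node for use in the query (the node and column names are assumptions):
spark_sql_node = {
    'SparkSQL': {
        'Name': 'filtered_events',       # hypothetical node name
        'Inputs': ['MyDataSource'],      # upstream node, matching the alias example above
        'SqlAliases': [
            {'From': 'MyDataSource', 'Alias': 'SqlName'},
        ],
        'SqlQuery': "select * from SqlName where event_type = 'click'",
    }
}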
DirectKinesisSource (dict) --
Specifies a direct Amazon Kinesis data source.
Name (string) --
The name of the data source.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
StreamingOptions (dict) --
Additional options for the Kinesis streaming data source.
EndpointUrl (string) --
The URL of the Kinesis endpoint.
StreamName (string) --
The name of the Kinesis data stream.
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingPosition (string) --
The starting position in the Kinesis data stream to read data from. The possible values are "latest" , "trim_horizon" , or "earliest" . The default value is "latest" .
MaxFetchTimeInMs (integer) --
The maximum time spent in the job executor to fetch a record from the Kinesis data stream per shard, specified in milliseconds (ms). The default value is 1000 .
MaxFetchRecordsPerShard (integer) --
The maximum number of records to fetch per shard in the Kinesis data stream. The default value is 100000 .
MaxRecordPerRead (integer) --
The maximum number of records to fetch from the Kinesis data stream in each getRecords operation. The default value is 10000 .
AddIdleTimeBetweenReads (boolean) --
Adds a time delay between two consecutive getRecords operations. The default value is "False" . This option is only configurable for Glue version 2.0 and above.
IdleTimeBetweenReadsInMs (integer) --
The minimum time delay between two consecutive getRecords operations, specified in ms. The default value is 1000 . This option is only configurable for Glue version 2.0 and above.
DescribeShardInterval (integer) --
The minimum time interval between two ListShards API calls for your script to consider resharding. The default value is 1s .
NumRetries (integer) --
The maximum number of retries for Kinesis Data Streams API requests. The default value is 3 .
RetryIntervalMs (integer) --
The cool-off time period (specified in ms) before retrying the Kinesis Data Streams API call. The default value is 1000 .
MaxRetryIntervalMs (integer) --
The maximum cool-off time period (specified in ms) between two retries of a Kinesis Data Streams API call. The default value is 10000 .
AvoidEmptyBatches (boolean) --
Avoids creating an empty microbatch job by checking for unread data in the Kinesis data stream before the batch is started. The default value is "False" .
StreamArn (string) --
The Amazon Resource Name (ARN) of the Kinesis data stream.
RoleArn (string) --
The Amazon Resource Name (ARN) of the role to assume using AWS Security Token Service (AWS STS). This role must have permissions for describe or read record operations for the Kinesis data stream. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSSessionName" .
RoleSessionName (string) --
An identifier for the session assuming the role using AWS STS. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSRoleARN" .
DataPreviewOptions (dict) --
Additional options for data preview.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
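Putting a few of these options together, a hypothetical DirectKinesisSource node might be configured as follows (the stream ARN, window size, and preview settings are placeholders):
kinesis_source_node = {
    'DirectKinesisSource': {
        'Name': 'clickstream_source',    # hypothetical node name
        'WindowSize': 100,               # micro-batch window, assumed value
        'DetectSchema': True,
        'StreamingOptions': {
            'StreamArn': 'arn:aws:kinesis:us-east-1:123456789012:stream/clickstream',  # placeholder
            'StartingPosition': 'trim_horizon',
            'MaxFetchRecordsPerShard': 100000,
            'AddIdleTimeBetweenReads': True,
            'IdleTimeBetweenReadsInMs': 1000,
        },
        'DataPreviewOptions': {
            'PollingTime': 500,          # milliseconds, assumed
            'RecordPollingLimit': 100,   # assumed
        },
    }
}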
DirectKafkaSource (dict) --
Specifies an Apache Kafka data store.
Name (string) --
The name of the data store.
StreamingOptions (dict) --
Specifies the streaming options.
BootstrapServers (string) --
A list of bootstrap server URLs, for example, b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094 . This option must be specified in the API call or defined in the table metadata in the Data Catalog.
SecurityProtocol (string) --
The protocol used to communicate with brokers. The possible values are "SSL" or "PLAINTEXT" .
ConnectionName (string) --
The name of the connection.
TopicName (string) --
The topic name as specified in Apache Kafka. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Assign (string) --
The specific TopicPartitions to consume. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
SubscribePattern (string) --
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingOffsets (string) --
The starting position in the Kafka topic to read data from. The possible values are "earliest" or "latest" . The default value is "latest" .
EndingOffsets (string) --
The end point when a batch query is ended. Possible values are either "latest" or a JSON string that specifies an ending offset for each TopicPartition .
PollTimeoutMs (integer) --
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is 512 .
NumRetries (integer) --
The number of times to retry before failing to fetch Kafka offsets. The default value is 3 .
RetryIntervalMs (integer) --
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is 10 .
MaxOffsetsPerTrigger (integer) --
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across topicPartitions of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.
MinPartitions (integer) --
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of spark partitions is equal to the number of Kafka partitions.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
DataPreviewOptions (dict) --
Specifies options related to data preview for viewing a sample of your data.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
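For orientation only (this sketch is not part of the API reference), a DirectKafkaSource entry inside CodeGenConfigurationNodes might be assembled as follows; the node key, connection name, and topic are hypothetical, and only one of TopicName, Assign, or SubscribePattern needs to be set.

# Hypothetical DirectKafkaSource node; "kafka_source_1" and all values are examples.
kafka_source_node = {
    "kafka_source_1": {
        "DirectKafkaSource": {
            "Name": "KafkaSource",
            "StreamingOptions": {
                "ConnectionName": "my-kafka-connection",  # assumed Glue connection
                "TopicName": "clickstream",               # one of TopicName/Assign/SubscribePattern
                "Classification": "json",
                "StartingOffsets": "latest",
                "PollTimeoutMs": 512,
                "NumRetries": 3,
            },
            "WindowSize": 100,
            "DetectSchema": True,
            "DataPreviewOptions": {"PollingTime": 1000, "RecordPollingLimit": 100},
        }
    }
}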
CatalogKinesisSource (dict) --
Specifies a Kinesis data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
Table (string) --
The name of the table in the database to read from.
Database (string) --
The name of the database to read from.
StreamingOptions (dict) --
Additional options for the Kinesis streaming data source.
EndpointUrl (string) --
The URL of the Kinesis endpoint.
StreamName (string) --
The name of the Kinesis data stream.
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingPosition (string) --
The starting position in the Kinesis data stream to read data from. The possible values are "latest" , "trim_horizon" , or "earliest" . The default value is "latest" .
MaxFetchTimeInMs (integer) --
The maximum time spent in the job executor to fetch a record from the Kinesis data stream per shard, specified in milliseconds (ms). The default value is 1000 .
MaxFetchRecordsPerShard (integer) --
The maximum number of records to fetch per shard in the Kinesis data stream. The default value is 100000 .
MaxRecordPerRead (integer) --
The maximum number of records to fetch from the Kinesis data stream in each getRecords operation. The default value is 10000 .
AddIdleTimeBetweenReads (boolean) --
Adds a time delay between two consecutive getRecords operations. The default value is "False" . This option is only configurable for Glue version 2.0 and above.
IdleTimeBetweenReadsInMs (integer) --
The minimum time delay between two consecutive getRecords operations, specified in ms. The default value is 1000 . This option is only configurable for Glue version 2.0 and above.
DescribeShardInterval (integer) --
The minimum time interval between two ListShards API calls for your script to consider resharding. The default value is 1s .
NumRetries (integer) --
The maximum number of retries for Kinesis Data Streams API requests. The default value is 3 .
RetryIntervalMs (integer) --
The cool-off time period (specified in ms) before retrying the Kinesis Data Streams API call. The default value is 1000 .
MaxRetryIntervalMs (integer) --
The maximum cool-off time period (specified in ms) between two retries of a Kinesis Data Streams API call. The default value is 10000 .
AvoidEmptyBatches (boolean) --
Avoids creating an empty microbatch job by checking for unread data in the Kinesis data stream before the batch is started. The default value is "False" .
StreamArn (string) --
The Amazon Resource Name (ARN) of the Kinesis data stream.
RoleArn (string) --
The Amazon Resource Name (ARN) of the role to assume using AWS Security Token Service (AWS STS). This role must have permissions for describe or read record operations for the Kinesis data stream. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSSessionName" .
RoleSessionName (string) --
An identifier for the session assuming the role using AWS STS. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSRoleARN" .
DataPreviewOptions (dict) --
Additional options for data preview.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
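As an illustrative sketch only, a CatalogKinesisSource node reading a stream in another account might combine the catalog table with RoleArn and RoleSessionName; the database, table, and ARN below are placeholders.

# Hypothetical CatalogKinesisSource node for a cross-account stream; names and ARN are examples.
kinesis_catalog_node = {
    "kinesis_source_1": {
        "CatalogKinesisSource": {
            "Name": "KinesisCatalogSource",
            "Database": "streaming_db",        # assumed Data Catalog database
            "Table": "events",                 # assumed Data Catalog table
            "WindowSize": 100,
            "DetectSchema": True,
            "StreamingOptions": {
                "StartingPosition": "trim_horizon",
                "MaxFetchRecordsPerShard": 100000,
                # Required only when the stream lives in a different account:
                "RoleArn": "arn:aws:iam::111122223333:role/GlueKinesisCrossAccount",
                "RoleSessionName": "glue-streaming-session",
            },
        }
    }
}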
CatalogKafkaSource (dict) --
Specifies an Apache Kafka data store in the Data Catalog.
Name (string) --
The name of the data store.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
Table (string) --
The name of the table in the database to read from.
Database (string) --
The name of the database to read from.
StreamingOptions (dict) --
Specifies the streaming options.
BootstrapServers (string) --
A list of bootstrap server URLs, for example: b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094 . This option must be specified in the API call or defined in the table metadata in the Data Catalog.
SecurityProtocol (string) --
The protocol used to communicate with brokers. The possible values are "SSL" or "PLAINTEXT" .
ConnectionName (string) --
The name of the connection.
TopicName (string) --
The topic name as specified in Apache Kafka. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Assign (string) --
The specific TopicPartitions to consume. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
SubscribePattern (string) --
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingOffsets (string) --
The starting position in the Kafka topic to read data from. The possible values are "earliest" or "latest" . The default value is "latest" .
EndingOffsets (string) --
The end point when a batch query is ended. Possible values are either "latest" or a JSON string that specifies an ending offset for each TopicPartition .
PollTimeoutMs (integer) --
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is 512 .
NumRetries (integer) --
The number of times to retry before failing to fetch Kafka offsets. The default value is 3 .
RetryIntervalMs (integer) --
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is 10 .
MaxOffsetsPerTrigger (integer) --
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across topicPartitions of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.
MinPartitions (integer) --
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of spark partitions is equal to the number of Kafka partitions.
DataPreviewOptions (dict) --
Specifies options related to data preview for viewing a sample of your data.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
DropNullFields (dict) --
Specifies a transform that removes columns from the dataset if all values in the column are 'null'. By default, Glue Studio recognizes null objects, but some values, such as empty strings, the string "null", integers with a value of -1, or other placeholders such as zeros, are not automatically recognized as nulls.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
NullCheckBoxList (dict) --
A structure that represents whether certain values are recognized as null values for removal.
IsEmpty (boolean) --
Specifies that an empty string is considered as a null value.
IsNullString (boolean) --
Specifies that a value spelling out the word 'null' is considered as a null value.
IsNegOne (boolean) --
Specifies that an integer value of -1 is considered as a null value.
NullTextList (list) --
A structure that specifies a list of NullValueField structures that represent a custom null value such as zero or other value being used as a null placeholder unique to the dataset.
The DropNullFields transform removes custom null values only if both the value of the null placeholder and the datatype match the data.
(dict) --
Represents a custom null value such as zero or another value being used as a null placeholder unique to the dataset.
Value (string) --
The value of the null placeholder.
Datatype (dict) --
The datatype of the value.
Id (string) --
The datatype of the value.
Label (string) --
A label assigned to the datatype.
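A minimal sketch (not from the API reference) of a DropNullFields node that treats empty strings, the string "null", -1, and a custom zero placeholder as nulls; the node key, upstream input, and Datatype values are assumed.

# Hypothetical DropNullFields node; upstream node key and Datatype Id/Label are examples.
drop_null_node = {
    "drop_nulls_1": {
        "DropNullFields": {
            "Name": "DropNullFields",
            "Inputs": ["kafka_source_1"],  # assumed upstream node
            "NullCheckBoxList": {"IsEmpty": True, "IsNullString": True, "IsNegOne": True},
            "NullTextList": [
                # Removed only where both the value and the datatype match the data.
                {"Value": "0", "Datatype": {"Id": "integer", "Label": "integer"}},
            ],
        }
    }
}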
Merge (dict) --
Specifies a transform that merges a DynamicFrame with a staging DynamicFrame based on the specified primary keys to identify records. Duplicate records (records with the same primary keys) are not de-duplicated.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Source (string) --
The source DynamicFrame that will be merged with a staging DynamicFrame .
PrimaryKeys (list) --
The list of primary key fields to match records from the source and staging dynamic frames.
(list) --
(string) --
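A hedged sketch of a Merge node, assuming two upstream nodes named "source_node" and "staging_node"; note that PrimaryKeys is a list of key paths, each itself a list of strings.

# Hypothetical Merge node; node names and key columns are examples.
merge_node = {
    "merge_1": {
        "Merge": {
            "Name": "MergeStaging",
            "Inputs": ["source_node", "staging_node"],
            "Source": "source_node",                      # the frame to merge the staging frame into
            "PrimaryKeys": [["customer_id"], ["order_id"]],
        }
    }
}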
Union (dict) --
Specifies a transform that combines the rows from two or more datasets into a single result.
Name (string) --
The name of the transform node.
Inputs (list) --
The node ID inputs to the transform.
(string) --
UnionType (string) --
Indicates the type of Union transform.
Specify ALL to join all rows from data sources to the resulting DynamicFrame. The resulting union does not remove duplicate rows.
Specify DISTINCT to remove duplicate rows in the resulting DynamicFrame.
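For illustration only, a Union node combining two upstream nodes and removing duplicates might look like this sketch; the node IDs are hypothetical.

# Hypothetical Union node; use "ALL" instead of "DISTINCT" to keep duplicate rows.
union_node = {
    "union_1": {
        "Union": {
            "Name": "UnionDatasets",
            "Inputs": ["node_a", "node_b"],  # assumed upstream node IDs
            "UnionType": "DISTINCT",
        }
    }
}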
PIIDetection (dict) --
Specifies a transform that identifies, removes or masks PII data.
Name (string) --
The name of the transform node.
Inputs (list) --
The node ID inputs to the transform.
(string) --
PiiType (string) --
Indicates the type of PIIDetection transform.
EntityTypesToDetect (list) --
Indicates the types of entities the PIIDetection transform will identify as PII data.
PII type entities include: PERSON_NAME, DATE, USA_SNN, EMAIL, USA_ITIN, USA_PASSPORT_NUMBER, PHONE_NUMBER, BANK_ACCOUNT, IP_ADDRESS, MAC_ADDRESS, USA_CPT_CODE, USA_HCPCS_CODE, USA_NATIONAL_DRUG_CODE, USA_MEDICARE_BENEFICIARY_IDENTIFIER, USA_HEALTH_INSURANCE_CLAIM_NUMBER, CREDIT_CARD, USA_NATIONAL_PROVIDER_IDENTIFIER, USA_DEA_NUMBER, USA_DRIVING_LICENSE
(string) --
OutputColumnName (string) --
Indicates the output column name that will contain any entity type detected in that row.
SampleFraction (float) --
Indicates the fraction of the data to sample when scanning for PII entities.
ThresholdFraction (float) --
Indicates the fraction of the data that must be met in order for a column to be identified as PII data.
MaskValue (string) --
Indicates the value that will replace the detected entity.
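A sketch of a PIIDetection node masking detected e-mail addresses and phone numbers, for orientation only; the PiiType value is an assumption, since the valid values are not enumerated in this excerpt, and the other names are placeholders.

# Hypothetical PIIDetection node; PiiType value and column names are assumed.
pii_node = {
    "pii_1": {
        "PIIDetection": {
            "Name": "MaskPII",
            "Inputs": ["union_1"],              # assumed upstream node
            "PiiType": "ColumnMasking",         # assumed value for this sketch
            "EntityTypesToDetect": ["EMAIL", "PHONE_NUMBER"],
            "OutputColumnName": "detected_entities",
            "SampleFraction": 0.1,
            "ThresholdFraction": 0.2,
            "MaskValue": "####",
        }
    }
}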
Aggregate (dict) --
Specifies a transform that groups rows by chosen fields and computes the aggregated value by specified function.
Name (string) --
The name of the transform node.
Inputs (list) --
Specifies the fields and rows to use as inputs for the aggregate transform.
(string) --
Groups (list) --
Specifies the fields to group by.
(list) --
(string) --
Aggs (list) --
Specifies the aggregate functions to be performed on specified fields.
(dict) --
Specifies the set of parameters needed to perform aggregation in the aggregate transform.
Column (list) --
Specifies the column on the data set on which the aggregation function will be applied.
(string) --
AggFunc (string) --
Specifies the aggregation function to apply.
Possible aggregation functions include: avg, countDistinct, count, first, last, kurtosis, max, min, skewness, stddev_samp, stddev_pop, sum, sumDistinct, var_samp, var_pop
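As a sketch only, an Aggregate node that groups by one column and sums another could be written as follows; the node key and column names are hypothetical.

# Hypothetical Aggregate node: SUM(amount) grouped by country.
aggregate_node = {
    "agg_1": {
        "Aggregate": {
            "Name": "SumByCountry",
            "Inputs": ["pii_1"],                 # assumed upstream node
            "Groups": [["country"]],
            "Aggs": [{"Column": ["amount"], "AggFunc": "sum"}],
        }
    }
}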
DropDuplicates (dict) --
Specifies a transform that removes rows of repeating data from a data set.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Columns (list) --
The name of the columns to be merged or removed if repeating.
(list) --
(string) --
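A minimal, hypothetical DropDuplicates node that de-duplicates on two columns (names assumed):

# Hypothetical DropDuplicates node; column names are examples.
dedup_node = {
    "dedup_1": {
        "DropDuplicates": {
            "Name": "DropDuplicates",
            "Inputs": ["agg_1"],                 # assumed upstream node
            "Columns": [["customer_id"], ["order_date"]],
        }
    }
}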
GovernedCatalogTarget (dict) --
Specifies a data target that writes to a governed catalog.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Table (string) --
The name of the table in the database to write to.
Database (string) --
The name of the database to write to.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the governed catalog.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
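For orientation only, a GovernedCatalogTarget node that writes partitioned output and updates the catalog schema might be sketched as below; the database, table, and partition key are assumptions.

# Hypothetical GovernedCatalogTarget node; UpdateBehavior uses a value shown in the request syntax.
governed_target_node = {
    "governed_target_1": {
        "GovernedCatalogTarget": {
            "Name": "WriteGoverned",
            "Inputs": ["dedup_1"],               # assumed upstream node
            "PartitionKeys": [["ingest_date"]],
            "Database": "lakehouse_db",          # assumed governed database
            "Table": "orders_curated",           # assumed governed table
            "SchemaChangePolicy": {
                "EnableUpdateCatalog": True,
                "UpdateBehavior": "UPDATE_IN_DATABASE",
            },
        }
    }
}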
GovernedCatalogSource (dict) --
Specifies a data source in a governed Data Catalog.
Name (string) --
The name of the data store.
Database (string) --
The database to read from.
Table (string) --
The database table to read from.
PartitionPredicate (string) --
Partitions satisfying this predicate are deleted. Files within the retention period in these partitions are not deleted. Set to "" – empty by default.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
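A hedged sketch of a GovernedCatalogSource node with bounded-execution limits; the database, table, and limits are placeholders.

# Hypothetical GovernedCatalogSource node capped at roughly 5 GB or 1000 files per run.
governed_source_node = {
    "governed_source_1": {
        "GovernedCatalogSource": {
            "Name": "ReadGoverned",
            "Database": "lakehouse_db",          # assumed governed database
            "Table": "orders_raw",               # assumed governed table
            "PartitionPredicate": "",            # empty by default
            "AdditionalOptions": {"BoundedSize": 5000000000, "BoundedFiles": 1000},
        }
    }
}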
MicrosoftSQLServerCatalogSource (dict) --
Specifies a Microsoft SQL server data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
MySQLCatalogSource (dict) --
Specifies a MySQL data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
OracleSQLCatalogSource (dict) --
Specifies an Oracle data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
PostgreSQLCatalogSource (dict) --
Specifies a PostgreSQL data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
MicrosoftSQLServerCatalogTarget (dict) --
Specifies a target that uses Microsoft SQL.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
MySQLCatalogTarget (dict) --
Specifies a target that uses MySQL.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
OracleSQLCatalogTarget (dict) --
Specifies a target that uses Oracle SQL.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
PostgreSQLCatalogTarget (dict) --
Specifies a target that uses PostgreSQL.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
JobsNotFound (list) --
A list of names of jobs not found.
(string) --
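A small usage sketch (not part of the API reference) that calls batch_get_jobs and reports which jobs run on the new flexible execution class; the job names are placeholders.

import boto3

glue = boto3.client("glue")

# Job names are examples; unknown names are returned in JobsNotFound rather than raising.
resp = glue.batch_get_jobs(JobNames=["nightly-etl", "adhoc-report"])
for job in resp["Jobs"]:
    # ExecutionClass is 'FLEX' or 'STANDARD'; it may be absent on older job definitions.
    print(job["Name"], job.get("ExecutionClass", "STANDARD"))
print("Not found:", resp["JobsNotFound"])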
{'Triggers': {'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'WAITING', 'ERROR'}}}}}
Returns a list of resource metadata for a given list of trigger names. After calling the ListTriggers operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.
See also: AWS API Documentation
Request Syntax
client.batch_get_triggers( TriggerNames=[ 'string', ] )
list
[REQUIRED]
A list of trigger names, which may be the names returned from the ListTriggers operation.
(string) --
dict
Response Syntax
{ 'Triggers': [ { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } }, ], 'TriggersNotFound': [ 'string', ] }
Response Structure
(dict) --
Triggers (list) --
A list of trigger definitions.
(dict) --
Information about a specific trigger.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
TriggersNotFound (list) --
A list of names of triggers not found.
(string) --
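As an illustrative sketch, the following call inspects trigger predicates, whose condition states can now also surface the WAITING and ERROR job-run states and the ERROR crawl state; the trigger name is a placeholder.

import boto3

glue = boto3.client("glue")

resp = glue.batch_get_triggers(TriggerNames=["after-nightly-etl"])  # name is an example
for trigger in resp["Triggers"]:
    predicate = trigger.get("Predicate", {})
    for cond in predicate.get("Conditions", []):
        target = cond.get("JobName") or cond.get("CrawlerName")
        state = cond.get("State") or cond.get("CrawlState")
        print(trigger["Name"], "waits on", target, "in state", state)
print("Not found:", resp["TriggersNotFound"])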
{'Workflows': {'Graph': {'Nodes': {'CrawlerDetails': {'Crawls': {'State': {'ERROR'}}}, 'JobDetails': {'JobRuns': {'ExecutionClass': 'FLEX | STANDARD', 'JobRunState': {'ERROR', 'WAITING'}}}, 'TriggerDetails': {'Trigger': {'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'ERROR', 'WAITING'}}}}}}}, 'LastRun': {'Graph': {'Nodes': {'CrawlerDetails': {'Crawls': {'State': {'ERROR'}}}, 'JobDetails': {'JobRuns': {'ExecutionClass': 'FLEX | STANDARD', 'JobRunState': {'ERROR', 'WAITING'}}}, 'TriggerDetails': {'Trigger': {'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'ERROR', 'WAITING'}}}}}}}, 'Statistics': {'ErroredActions': 'integer', 'WaitingActions': 'integer'}}}}
Returns a list of resource metadata for a given list of workflow names. After calling the ListWorkflows operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.
See also: AWS API Documentation
Request Syntax
client.batch_get_workflows( Names=[ 'string', ], IncludeGraph=True|False )
list
[REQUIRED]
A list of workflow names, which may be the names returned from the ListWorkflows operation.
(string) --
boolean
Specifies whether to include a graph when returning the workflow resource metadata.
dict
Response Syntax
{ 'Workflows': [ { 'Name': 'string', 'Description': 'string', 'DefaultRunProperties': { 'string': 'string' }, 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'LastRun': { 'Name': 'string', 'WorkflowRunId': 'string', 'PreviousRunId': 'string', 'WorkflowRunProperties': { 'string': 'string' }, 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'Status': 'RUNNING'|'COMPLETED'|'STOPPING'|'STOPPED'|'ERROR', 'ErrorMessage': 'string', 'Statistics': { 'TotalActions': 123, 'TimeoutActions': 123, 'FailedActions': 123, 'StoppedActions': 123, 'SucceededActions': 123, 'RunningActions': 123, 'ErroredActions': 123, 'WaitingActions': 123 }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'StartingEventBatchCondition': { 'BatchSize': 123, 'BatchWindow': 123 } }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 
'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'MaxConcurrentRuns': 123, 'BlueprintDetails': { 'BlueprintName': 'string', 'RunId': 'string' } }, ], 'MissingWorkflows': [ 'string', ] }
Response Structure
(dict) --
Workflows (list) --
A list of workflow resource metadata.
(dict) --
A workflow is a collection of multiple dependent Glue jobs and crawlers that are run to complete a complex ETL task. A workflow manages the execution and monitoring of all its jobs and crawlers.
Name (string) --
The name of the workflow.
Description (string) --
A description of the workflow.
DefaultRunProperties (dict) --
A collection of properties to be used as part of each execution of the workflow. The run properties are made available to each job in the workflow. A job can modify the properties for the next jobs in the flow.
(string) --
(string) --
CreatedOn (datetime) --
The date and time when the workflow was created.
LastModifiedOn (datetime) --
The date and time when the workflow was last modified.
LastRun (dict) --
The information about the last execution of the workflow.
Name (string) --
Name of the workflow that was run.
WorkflowRunId (string) --
The ID of this workflow run.
PreviousRunId (string) --
The ID of the previous workflow run.
WorkflowRunProperties (dict) --
The workflow run properties which were set during the run.
(string) --
(string) --
StartedOn (datetime) --
The date and time when the workflow run was started.
CompletedOn (datetime) --
The date and time when the workflow run completed.
Status (string) --
The status of the workflow run.
ErrorMessage (string) --
This error message describes any error that may have occurred in starting the workflow run. Currently the only error message is "Concurrent runs exceeded for workflow: foo ."
Statistics (dict) --
The statistics of the run.
TotalActions (integer) --
Total number of Actions in the workflow run.
TimeoutActions (integer) --
Total number of Actions that timed out.
FailedActions (integer) --
Total number of Actions that have failed.
StoppedActions (integer) --
Total number of Actions that have stopped.
SucceededActions (integer) --
Total number of Actions that have succeeded.
RunningActions (integer) --
Total number of Actions in running state.
ErroredActions (integer) --
Indicates the count of job runs in the ERROR state in the workflow run.
WaitingActions (integer) --
Indicates the count of job runs in WAITING state in the workflow run.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique ID of the node within the workflow where the edge starts.
DestinationId (string) --
The unique ID of the node within the workflow where the edge ends.
StartingEventBatchCondition (dict) --
The batch condition that started the workflow run.
BatchSize (integer) --
Number of events in the batch.
BatchWindow (integer) --
Duration of the batch window in seconds.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique ID of the node within the workflow where the edge starts.
DestinationId (string) --
The unique ID of the node within the workflow where the edge ends.
MaxConcurrentRuns (integer) --
You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.
BlueprintDetails (dict) --
This structure indicates the details of the blueprint that this particular workflow is created from.
BlueprintName (string) --
The name of the blueprint.
RunId (string) --
The run ID for this blueprint.
MissingWorkflows (list) --
A list of names of workflows not found.
(string) --
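A hedged usage sketch that summarizes the new ErroredActions and WaitingActions counters from a workflow's last run; the workflow name is a placeholder.

import boto3

glue = boto3.client("glue")

resp = glue.batch_get_workflows(Names=["daily-pipeline"], IncludeGraph=False)  # name is an example
for wf in resp["Workflows"]:
    stats = wf.get("LastRun", {}).get("Statistics", {})
    # ErroredActions and WaitingActions are the counters added in this update.
    print(wf["Name"], "-",
          stats.get("ErroredActions", 0), "errored,",
          stats.get("WaitingActions", 0), "waiting")
print("Missing:", resp["MissingWorkflows"])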
{'ExecutionClass': 'FLEX | STANDARD'}
Creates a new job definition.
See also: AWS API Documentation
Request Syntax
client.create_job( Name='string', Description='string', LogUri='string', Role='string', ExecutionProperty={ 'MaxConcurrentRuns': 123 }, Command={ 'Name': 'string', 'ScriptLocation': 'string', 'PythonVersion': 'string' }, DefaultArguments={ 'string': 'string' }, NonOverridableArguments={ 'string': 'string' }, Connections={ 'Connections': [ 'string', ] }, MaxRetries=123, AllocatedCapacity=123, Timeout=123, MaxCapacity=123.0, SecurityConfiguration='string', Tags={ 'string': 'string' }, NotificationProperty={ 'NotifyDelayAfter': 123 }, GlueVersion='string', NumberOfWorkers=123, WorkerType='Standard'|'G.1X'|'G.2X'|'G.025X', CodeGenConfigurationNodes={ 'string': { 'AthenaConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'ConnectionTable': 'string', 'SchemaName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'JDBCConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'FilterPredicate': 'string', 'PartitionColumn': 'string', 'LowerBound': 123, 'UpperBound': 123, 'NumPartitions': 123, 'JobBookmarkKeys': [ 'string', ], 'JobBookmarkKeysSortOrder': 'string', 'DataTypeMapping': { 'string': 'DATE'|'STRING'|'TIMESTAMP'|'INT'|'FLOAT'|'LONG'|'BIGDECIMAL'|'BYTE'|'SHORT'|'DOUBLE' } }, 'ConnectionTable': 'string', 'Query': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'RedshiftSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string' }, 'S3CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'S3CsvSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'Separator': 'comma'|'ctrla'|'pipe'|'semicolon'|'tab', 'Escaper': 'string', 'QuoteChar': 'quote'|'quillemet'|'single_quote'|'disabled', 'Multiline': True|False, 'WithHeader': True|False, 'WriteHeader': True|False, 'SkipFirst': True|False, 'OptimizePerformance': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3JsonSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'JsonPath': 'string', 'Multiline': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3ParquetSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 
'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'RelationalCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'DynamoDBCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'JDBCConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectionTable': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'RedshiftTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string', 'UpsertRedshiftOptions': { 'TableLocation': 'string', 'ConnectionName': 'string', 'UpsertKeys': [ 'string', ] } }, 'S3CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3GlueParquetTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'S3DirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'string', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'ApplyMapping': { 'Name': 'string', 'Inputs': [ 'string', ], 'Mapping': [ { 'ToKey': 'string', 'FromPath': [ 'string', ], 'FromType': 'string', 'ToType': 'string', 'Dropped': True|False, 'Children': {'... 
recursive ...'} }, ] }, 'SelectFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'DropFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'RenameField': { 'Name': 'string', 'Inputs': [ 'string', ], 'SourcePath': [ 'string', ], 'TargetPath': [ 'string', ] }, 'Spigot': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Topk': 123, 'Prob': 123.0 }, 'Join': { 'Name': 'string', 'Inputs': [ 'string', ], 'JoinType': 'equijoin'|'left'|'right'|'outer'|'leftsemi'|'leftanti', 'Columns': [ { 'From': 'string', 'Keys': [ [ 'string', ], ] }, ] }, 'SplitFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'SelectFromCollection': { 'Name': 'string', 'Inputs': [ 'string', ], 'Index': 123 }, 'FillMissingValues': { 'Name': 'string', 'Inputs': [ 'string', ], 'ImputedPath': 'string', 'FilledPath': 'string' }, 'Filter': { 'Name': 'string', 'Inputs': [ 'string', ], 'LogicalOperator': 'AND'|'OR', 'Filters': [ { 'Operation': 'EQ'|'LT'|'GT'|'LTE'|'GTE'|'REGEX'|'ISNULL', 'Negated': True|False, 'Values': [ { 'Type': 'COLUMNEXTRACTED'|'CONSTANT', 'Value': [ 'string', ] }, ] }, ] }, 'CustomCode': { 'Name': 'string', 'Inputs': [ 'string', ], 'Code': 'string', 'ClassName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkSQL': { 'Name': 'string', 'Inputs': [ 'string', ], 'SqlQuery': 'string', 'SqlAliases': [ { 'From': 'string', 'Alias': 'string' }, ], 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'DirectKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DirectKafkaSource': { 'Name': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123 }, 'WindowSize': 123, 'DetectSchema': True|False, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string' }, 
'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKafkaSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123 }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DropNullFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'NullCheckBoxList': { 'IsEmpty': True|False, 'IsNullString': True|False, 'IsNegOne': True|False }, 'NullTextList': [ { 'Value': 'string', 'Datatype': { 'Id': 'string', 'Label': 'string' } }, ] }, 'Merge': { 'Name': 'string', 'Inputs': [ 'string', ], 'Source': 'string', 'PrimaryKeys': [ [ 'string', ], ] }, 'Union': { 'Name': 'string', 'Inputs': [ 'string', ], 'UnionType': 'ALL'|'DISTINCT' }, 'PIIDetection': { 'Name': 'string', 'Inputs': [ 'string', ], 'PiiType': 'RowAudit'|'RowMasking'|'ColumnAudit'|'ColumnMasking', 'EntityTypesToDetect': [ 'string', ], 'OutputColumnName': 'string', 'SampleFraction': 123.0, 'ThresholdFraction': 123.0, 'MaskValue': 'string' }, 'Aggregate': { 'Name': 'string', 'Inputs': [ 'string', ], 'Groups': [ [ 'string', ], ], 'Aggs': [ { 'Column': [ 'string', ], 'AggFunc': 'avg'|'countDistinct'|'count'|'first'|'last'|'kurtosis'|'max'|'min'|'skewness'|'stddev_samp'|'stddev_pop'|'sum'|'sumDistinct'|'var_samp'|'var_pop' }, ] }, 'DropDuplicates': { 'Name': 'string', 'Inputs': [ 'string', ], 'Columns': [ [ 'string', ], ] }, 'GovernedCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'GovernedCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'MicrosoftSQLServerCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MicrosoftSQLServerCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' } } }, ExecutionClass='FLEX'|'STANDARD' )
string
[REQUIRED]
The name you assign to this job definition. It must be unique in your account.
string
Description of the job being defined.
string
This field is reserved for future use.
string
[REQUIRED]
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
dict
An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.
MaxConcurrentRuns (integer) --
The maximum number of concurrent runs allowed for the job. The default is 1. An error is returned when this threshold is reached. The maximum value you can specify is controlled by a service limit.
dict
[REQUIRED]
The JobCommand that runs this job.
Name (string) --
The name of the job command. For an Apache Spark ETL job, this must be glueetl . For a Python shell job, it must be pythonshell . For an Apache Spark streaming ETL job, this must be gluestreaming .
ScriptLocation (string) --
Specifies the Amazon Simple Storage Service (Amazon S3) path to a script that runs a job.
PythonVersion (string) --
The Python version being used to run a Python shell job. Allowed values are 2 or 3.
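For orientation, the following is a minimal sketch of a create_job call built from the parameters above; the job name, role ARN, and script location are illustrative placeholders, not values taken from this documentation.
import boto3

glue = boto3.client('glue')

# Minimal Spark ETL job definition. All names, ARNs, and S3 paths are placeholders.
response = glue.create_job(
    Name='example-etl-job',
    Role='arn:aws:iam::123456789012:role/ExampleGlueRole',
    Command={
        'Name': 'glueetl',                                     # Spark ETL job
        'ScriptLocation': 's3://example-bucket/scripts/job.py',
        'PythonVersion': '3',
    },
    GlueVersion='3.0',
    ExecutionClass='FLEX',   # run on spare capacity for non-urgent workloads
)
print(response['Name'])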
dict
The default arguments for this job.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
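A small sketch of a DefaultArguments map, assuming the conventional "--" prefix for job arguments; the keys and S3 paths below are illustrative, with the first two consumed by Glue itself and the last one consumed by your own script.
# All paths are placeholders.
default_arguments = {
    '--TempDir': 's3://example-bucket/temp/',           # consumed by Glue
    '--job-bookmark-option': 'job-bookmark-enable',     # consumed by Glue
    '--source_path': 's3://example-bucket/input/',      # consumed by your script
}
# Passed to create_job as DefaultArguments=default_arguments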
dict
Non-overridable arguments for this job, specified as name-value pairs.
(string) --
(string) --
dict
The connections used for this job.
Connections (list) --
A list of connections used by the job.
(string) --
integer
The maximum number of times to retry this job if it fails.
integer
This parameter is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this Job. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
integer
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
float
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
For Glue version 2.0 jobs, you cannot specify a Maximum capacity . Instead, you should specify a Worker type and the Number of workers .
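For example, a Python shell job can be created with a fractional MaxCapacity, while MaxCapacity is omitted entirely when WorkerType and NumberOfWorkers are used; this sketch assumes placeholder names and paths.
python_shell_job = dict(
    Name='example-python-shell-job',
    Role='arn:aws:iam::123456789012:role/ExampleGlueRole',
    Command={
        'Name': 'pythonshell',
        'ScriptLocation': 's3://example-bucket/scripts/shell.py',
        'PythonVersion': '3',
    },
    MaxCapacity=0.0625,   # fractional DPUs are allowed only for Python shell jobs
)
# glue.create_job(**python_shell_job)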
string
The name of the SecurityConfiguration structure to be used with this job.
dict
The tags to use with this job. You may use tags to limit access to the job. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
(string) --
(string) --
dict
Specifies configuration properties of a job notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
string
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
integer
The number of workers of a defined workerType that are allocated when a job runs.
string
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
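A sketch of the worker-based sizing described above, assuming a Glue 3.0 streaming job; names and paths are placeholders.
streaming_job = dict(
    Name='example-streaming-job',
    Role='arn:aws:iam::123456789012:role/ExampleGlueRole',
    Command={
        'Name': 'gluestreaming',
        'ScriptLocation': 's3://example-bucket/scripts/stream.py',
        'PythonVersion': '3',
    },
    GlueVersion='3.0',      # G.025X is only available for Glue 3.0 streaming jobs
    WorkerType='G.025X',
    NumberOfWorkers=2,      # do not also set MaxCapacity when sizing with workers
)
# glue.create_job(**streaming_job)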
dict
The representation of a directed acyclic graph on which both the Glue Studio visual component and Glue Studio code generation is based.
(string) --
(dict) --
CodeGenConfigurationNode enumerates all valid Node types. One and only one of its member variables can be populated.
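To illustrate how nodes compose into a DAG, here is a sketch of a CodeGenConfigurationNodes value with one source, one transform, and one target. The node keys ('node-source', 'node-map', 'node-target') are arbitrary identifiers chosen for this example, and downstream nodes reference upstream nodes through their Inputs lists; database, table, and path names are placeholders.
code_gen_nodes = {
    'node-source': {
        'S3CatalogSource': {
            'Name': 'Read orders',
            'Database': 'example_db',
            'Table': 'orders_raw',
        }
    },
    'node-map': {
        'ApplyMapping': {
            'Name': 'Rename id',
            'Inputs': ['node-source'],     # upstream node key
            'Mapping': [
                {'ToKey': 'order_id', 'FromPath': ['id'],
                 'FromType': 'string', 'ToType': 'string', 'Dropped': False},
            ],
        }
    },
    'node-target': {
        'S3DirectTarget': {
            'Name': 'Write parquet',
            'Inputs': ['node-map'],
            'Path': 's3://example-bucket/output/orders/',
            'Format': 'parquet',
        }
    },
}
# Passed to create_job as CodeGenConfigurationNodes=code_gen_nodes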
AthenaConnectorSource (dict) --
Specifies a connector to an Amazon Athena data source.
Name (string) -- [REQUIRED]
The name of the data source.
ConnectionName (string) -- [REQUIRED]
The name of the connection that is associated with the connector.
ConnectorName (string) -- [REQUIRED]
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) -- [REQUIRED]
The type of connection, such as marketplace.athena or custom.athena, designating a connection to an Amazon Athena data store.
ConnectionTable (string) --
The name of the table in the data source.
SchemaName (string) -- [REQUIRED]
The name of the Cloudwatch log group to read from. For example, /aws-glue/jobs/output .
OutputSchemas (list) --
Specifies the data schema for the custom Athena source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
JDBCConnectorSource (dict) --
Specifies a connector to a JDBC data source.
Name (string) -- [REQUIRED]
The name of the data source.
ConnectionName (string) -- [REQUIRED]
The name of the connection that is associated with the connector.
ConnectorName (string) -- [REQUIRED]
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) -- [REQUIRED]
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
FilterPredicate (string) --
Extra condition clause to filter data from source. For example:
BillingCity='Mountain View'
When using a query instead of a table name, you should validate that the query works with the specified filterPredicate .
PartitionColumn (string) --
The name of an integer column that is used for partitioning. This option works only when it's included with lowerBound , upperBound , and numPartitions . This option works the same way as in the Spark SQL JDBC reader.
LowerBound (integer) --
The minimum value of partitionColumn that is used to decide partition stride.
UpperBound (integer) --
The maximum value of partitionColumn that is used to decide partition stride.
NumPartitions (integer) --
The number of partitions. This value, along with lowerBound (inclusive) and upperBound (exclusive), form partition strides for generated WHERE clause expressions that are used to split the partitionColumn .
JobBookmarkKeys (list) --
The name of the job bookmark keys on which to sort.
(string) --
JobBookmarkKeysSortOrder (string) --
Specifies an ascending or descending sort order.
DataTypeMapping (dict) --
Custom data type mapping that builds a mapping from a JDBC data type to an Glue data type. For example, the option "dataTypeMapping":{"FLOAT":"STRING"} maps data fields of JDBC type FLOAT into the Java String type by calling the ResultSet.getString() method of the driver, and uses it to build the Glue record. The ResultSet object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.
(string) --
(string) --
ConnectionTable (string) --
The name of the table in the data source.
Query (string) --
The table or SQL query to get the data from. You can specify either ConnectionTable or query , but not both.
OutputSchemas (list) --
Specifies the data schema for the custom JDBC source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
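A sketch of a JDBCConnectorSource node using the partitioned-read options described above; connection, connector, table, and column names are placeholders.
jdbc_source_node = {
    'JDBCConnectorSource': {
        'Name': 'JDBC source',
        'ConnectionName': 'example-jdbc-connection',
        'ConnectorName': 'example-connector',
        'ConnectionType': 'custom.jdbc',
        'ConnectionTable': 'orders',
        'AdditionalOptions': {
            'FilterPredicate': "order_status='SHIPPED'",
            'PartitionColumn': 'order_id',   # integer column used for partitioning
            'LowerBound': 0,
            'UpperBound': 1000000,
            'NumPartitions': 10,
        },
    }
}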
SparkConnectorSource (dict) --
Specifies a connector to an Apache Spark data source.
Name (string) -- [REQUIRED]
The name of the data source.
ConnectionName (string) -- [REQUIRED]
The name of the connection that is associated with the connector.
ConnectorName (string) -- [REQUIRED]
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) -- [REQUIRED]
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies data schema for the custom spark source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
CatalogSource (dict) --
Specifies a data store in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data store.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
RedshiftSource (dict) --
Specifies an Amazon Redshift data store.
Name (string) -- [REQUIRED]
The name of the Amazon Redshift data store.
Database (string) -- [REQUIRED]
The database to read from.
Table (string) -- [REQUIRED]
The database table to read from.
RedshiftTmpDir (string) --
The Amazon S3 path where temporary data can be staged when copying out of the database.
TmpDirIAMRole (string) --
The IAM role with permissions.
S3CatalogSource (dict) --
Specifies an Amazon S3 data store in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data store.
Database (string) -- [REQUIRED]
The database to read from.
Table (string) -- [REQUIRED]
The database table to read from.
PartitionPredicate (string) --
Partitions satisfying this predicate are deleted. Files within the retention period in these partitions are not deleted. Set to "" – empty by default.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
S3CsvSource (dict) --
Specifies a comma-separated values (CSV) data store stored in Amazon S3.
Name (string) -- [REQUIRED]
The name of the data store.
Paths (list) -- [REQUIRED]
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2" .
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
Separator (string) -- [REQUIRED]
Specifies the delimiter character. The default is a comma: ",", but any other character can be specified.
Escaper (string) --
Specifies a character to use for escaping. This option is used only when reading CSV files. The default value is none . If enabled, the character which immediately follows is used as-is, except for a small set of well-known escapes ( \n , \r , \t , and \0 ).
QuoteChar (string) -- [REQUIRED]
Specifies the character to use for quoting. The default is a double quote: '"' . Set this to -1 to turn off quoting entirely.
Multiline (boolean) --
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The default value is False , which allows for more aggressive file-splitting during parsing.
WithHeader (boolean) --
A Boolean value that specifies whether to treat the first line as a header. The default value is False .
WriteHeader (boolean) --
A Boolean value that specifies whether to write the header to output. The default value is True .
SkipFirst (boolean) --
A Boolean value that specifies whether to skip the first data line. The default value is False .
OptimizePerformance (boolean) --
A Boolean value that specifies whether to use the advanced SIMD CSV reader along with Apache Arrow based columnar memory formats. Only available in Glue version 3.0.
OutputSchemas (list) --
Specifies the data schema for the S3 CSV source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
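A sketch of an S3CsvSource node using the parsing options described above; the S3 path is a placeholder.
csv_source_node = {
    'S3CsvSource': {
        'Name': 'CSV source',
        'Paths': ['s3://example-bucket/raw/csv/'],
        'Separator': 'comma',
        'QuoteChar': 'quote',
        'WithHeader': True,        # treat the first line as a header
        'Recurse': True,           # also read files in subdirectories
        'CompressionType': 'gzip',
    }
}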
S3JsonSource (dict) --
Specifies a JSON data store stored in Amazon S3.
Name (string) -- [REQUIRED]
The name of the data store.
Paths (list) -- [REQUIRED]
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2" .
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
JsonPath (string) --
A JsonPath string defining the JSON data.
Multiline (boolean) --
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The default value is False , which allows for more aggressive file-splitting during parsing.
OutputSchemas (list) --
Specifies the data schema for the S3 JSON source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
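A sketch of an S3JsonSource node; the S3 path and JsonPath expression are placeholders.
json_source_node = {
    'S3JsonSource': {
        'Name': 'JSON source',
        'Paths': ['s3://example-bucket/raw/json/'],
        'JsonPath': '$.records[*]',   # illustrative JsonPath into each document
        'Multiline': False,
        'Recurse': True,
    }
}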
S3ParquetSource (dict) --
Specifies an Apache Parquet data store stored in Amazon S3.
Name (string) -- [REQUIRED]
The name of the data store.
Paths (list) -- [REQUIRED]
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2" .
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
OutputSchemas (list) --
Specifies the data schema for the S3 Parquet source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
RelationalCatalogSource (dict) --
Specifies a Relational database data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
DynamoDBCatalogSource (dict) --
Specifies a DynamoDB data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
JDBCConnectorTarget (dict) --
Specifies a data target that writes to a JDBC data store using a connector.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
ConnectionName (string) -- [REQUIRED]
The name of the connection that is associated with the connector.
ConnectionTable (string) -- [REQUIRED]
The name of the table in the data target.
ConnectorName (string) -- [REQUIRED]
The name of a connector that will be used.
ConnectionType (string) -- [REQUIRED]
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data target.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies the data schema for the JDBC target.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
SparkConnectorTarget (dict) --
Specifies a target that uses an Apache Spark connector.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
ConnectionName (string) -- [REQUIRED]
The name of a connection for an Apache Spark connector.
ConnectorName (string) -- [REQUIRED]
The name of an Apache Spark connector.
ConnectionType (string) -- [REQUIRED]
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies the data schema for the custom spark target.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
CatalogTarget (dict) --
Specifies a target that uses a Glue Data Catalog table.
Name (string) -- [REQUIRED]
The name of your data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The database that contains the table you want to use as the target. This database must already exist in the Data Catalog.
Table (string) -- [REQUIRED]
The table that defines the schema of your output data. This table must already exist in the Data Catalog.
RedshiftTarget (dict) --
Specifies a target that uses Amazon Redshift.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The name of the database to write to.
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
RedshiftTmpDir (string) --
The Amazon S3 path where temporary data can be staged when copying out of the database.
TmpDirIAMRole (string) --
The IAM role with permissions.
UpsertRedshiftOptions (dict) --
The set of options to configure an upsert operation when writing to a Redshift target.
TableLocation (string) --
The physical location of the Redshift table.
ConnectionName (string) --
The name of the connection to use to write to Redshift.
UpsertKeys (list) --
The keys used to determine whether to perform an update or insert.
(string) --
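A sketch of a RedshiftTarget node configured for upserts as described above; database, table, connection, and key names are placeholders.
redshift_target_node = {
    'RedshiftTarget': {
        'Name': 'Redshift upsert',
        'Inputs': ['node-map'],                       # upstream node key
        'Database': 'example_db',
        'Table': 'orders',
        'RedshiftTmpDir': 's3://example-bucket/redshift-tmp/',
        'UpsertRedshiftOptions': {
            'TableLocation': 'public.orders',
            'ConnectionName': 'example-redshift-connection',
            'UpsertKeys': ['order_id'],   # decides update vs. insert per row
        },
    }
}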
S3CatalogTarget (dict) --
Specifies a data target that writes to Amazon S3 using the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
Database (string) -- [REQUIRED]
The name of the database to write to.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
S3GlueParquetTarget (dict) --
Specifies a data target that writes to Amazon S3 in Apache Parquet columnar storage.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Path (string) -- [REQUIRED]
A single Amazon S3 path to write to.
Compression (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "snappy" , "lzo" , "gzip" , "uncompressed" , and "none" .
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
Table (string) --
Specifies the table in the database that the schema change policy applies to.
Database (string) --
Specifies the database that the schema change policy applies to.
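A sketch of an S3GlueParquetTarget node whose schema change policy updates the Data Catalog; the path, database, and table names are placeholders.
parquet_target_node = {
    'S3GlueParquetTarget': {
        'Name': 'Parquet target',
        'Inputs': ['node-map'],                         # upstream node key
        'Path': 's3://example-bucket/curated/orders/',
        'Compression': 'snappy',
        'PartitionKeys': [['order_date']],              # partition output by this key
        'SchemaChangePolicy': {
            'EnableUpdateCatalog': True,
            'UpdateBehavior': 'UPDATE_IN_DATABASE',
            'Database': 'example_db',                   # catalog entry to update
            'Table': 'orders_curated',
        },
    }
}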
S3DirectTarget (dict) --
Specifies a data target that writes to Amazon S3.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Path (string) -- [REQUIRED]
A single Amazon S3 path to write to.
Compression (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2" .
Format (string) -- [REQUIRED]
Specifies the data output format for the target.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
Table (string) --
Specifies the table in the database that the schema change policy applies to.
Database (string) --
Specifies the database that the schema change policy applies to.
ApplyMapping (dict) --
Specifies a transform that maps data property keys in the data source to data property keys in the data target. You can rename keys, modify the data types for keys, and choose which keys to drop from the dataset.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Mapping (list) -- [REQUIRED]
Specifies the mapping of data property keys in the data source to data property keys in the data target.
(dict) --
Specifies the mapping of data property keys.
ToKey (string) --
After the apply mapping, what the name of the column should be. Can be the same as FromPath .
FromPath (list) --
The table or column to be modified.
(string) --
FromType (string) --
The type of the data to be modified.
ToType (string) --
The data type that the data is to be modified to.
Dropped (boolean) --
If true, then the column is removed.
Children (list) --
Only applicable to nested data structures. If you want to change the parent structure, but also one of its children, you can fill out this data structure. It is also Mapping , but its FromPath will be the parent's FromPath plus the FromPath from this structure.
For the children part, suppose you have the structure:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Children": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false }] }
You can specify a Mapping that looks like:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Children": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false }] }
SelectFields (dict) --
Specifies a transform that chooses the data property keys that you want to keep.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Paths (list) -- [REQUIRED]
A JSON path to a variable in the data structure.
(list) --
(string) --
DropFields (dict) --
Specifies a transform that chooses the data property keys that you want to drop.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Paths (list) -- [REQUIRED]
A JSON path to a variable in the data structure.
(list) --
(string) --
RenameField (dict) --
Specifies a transform that renames a single data property key.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
SourcePath (list) -- [REQUIRED]
A JSON path to a variable in the data structure for the source data.
(string) --
TargetPath (list) -- [REQUIRED]
A JSON path to a variable in the data structure for the target data.
(string) --
Spigot (dict) --
Specifies a transform that writes samples of the data to an Amazon S3 bucket.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Path (string) -- [REQUIRED]
A path in Amazon S3 where the transform will write a subset of records from the dataset to a JSON file in an Amazon S3 bucket.
Topk (integer) --
Specifies a number of records to write starting from the beginning of the dataset.
Prob (float) --
The probability (a decimal value with a maximum value of 1) of picking any given record. A value of 1 indicates that each row read from the dataset should be included in the sample output.
Join (dict) --
Specifies a transform that joins two datasets into one dataset using a comparison phrase on the specified data property keys. You can use inner, outer, left, right, left semi, and left anti joins.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
JoinType (string) -- [REQUIRED]
Specifies the type of join to be performed on the datasets.
Columns (list) -- [REQUIRED]
A list of the two columns to be joined.
(dict) --
Specifies a column to be joined.
From (string) -- [REQUIRED]
The column to be joined.
Keys (list) -- [REQUIRED]
The key of the column to be joined.
(list) --
(string) --
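A sketch of a Join node; the two input node keys and column names are placeholders, and mapping each Columns entry's From to one of the inputs is an assumption made for this example.
join_node = {
    'Join': {
        'Name': 'Join orders to customers',
        'Inputs': ['node-orders', 'node-customers'],
        'JoinType': 'equijoin',
        'Columns': [
            {'From': 'node-orders', 'Keys': [['customer_id']]},    # assumed: From names an input
            {'From': 'node-customers', 'Keys': [['id']]},
        ],
    }
}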
SplitFields (dict) --
Specifies a transform that splits data property keys into two DynamicFrames . The output is a collection of DynamicFrames : one with selected data property keys, and one with the remaining data property keys.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Paths (list) -- [REQUIRED]
A JSON path to a variable in the data structure.
(list) --
(string) --
SelectFromCollection (dict) --
Specifies a transform that chooses one DynamicFrame from a collection of DynamicFrames . The output is the selected DynamicFrame .
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Index (integer) -- [REQUIRED]
The index for the DynamicFrame to be selected.
FillMissingValues (dict) --
Specifies a transform that locates records in the dataset that have missing values and adds a new field with a value determined by imputation. The input data set is used to train the machine learning model that determines what the missing value should be.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
ImputedPath (string) -- [REQUIRED]
A JSON path to a variable in the data structure for the dataset that is imputed.
FilledPath (string) --
A JSON path to a variable in the data structure for the dataset that is filled.
Filter (dict) --
Specifies a transform that splits a dataset into two, based on a filter condition.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
LogicalOperator (string) -- [REQUIRED]
The operator used to filter rows by comparing the key value to a specified value.
Filters (list) -- [REQUIRED]
Specifies a filter expression.
(dict) --
Specifies a filter expression.
Operation (string) -- [REQUIRED]
The type of operation to perform in the expression.
Negated (boolean) --
Whether the expression is to be negated.
Values (list) -- [REQUIRED]
A list of filter values.
(dict) --
Represents a single entry in the list of values for a FilterExpression .
Type (string) -- [REQUIRED]
The type of filter value.
Value (list) -- [REQUIRED]
The value to be associated.
(string) --
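A sketch of a Filter node; the column and constant are placeholders, and pairing a COLUMNEXTRACTED value with a CONSTANT value inside one expression is an assumption made for this example.
filter_node = {
    'Filter': {
        'Name': 'Keep shipped orders',
        'Inputs': ['node-source'],
        'LogicalOperator': 'AND',
        'Filters': [
            {
                'Operation': 'EQ',
                'Negated': False,
                'Values': [
                    {'Type': 'COLUMNEXTRACTED', 'Value': ['order_status']},  # assumed pairing
                    {'Type': 'CONSTANT', 'Value': ['SHIPPED']},
                ],
            },
        ],
    }
}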
CustomCode (dict) --
Specifies a transform that uses custom code you provide to perform the data transformation. The output is a collection of DynamicFrames.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Code (string) -- [REQUIRED]
The custom code that is used to perform the data transformation.
ClassName (string) -- [REQUIRED]
The name defined for the custom code node class.
OutputSchemas (list) --
Specifies the data schema for the custom code transform.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
SparkSQL (dict) --
Specifies a transform where you enter a SQL query using Spark SQL syntax to transform the data. The output is a single DynamicFrame .
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
(string) --
SqlQuery (string) -- [REQUIRED]
A SQL query that must use Spark SQL syntax and return a single data set.
SqlAliases (list) -- [REQUIRED]
A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource, and Alias as SqlName, then in your SQL you can do:
select * from SqlName
and that gets data from MyDataSource.
(dict) --
Represents a single entry in the list of values for SqlAliases .
From (string) -- [REQUIRED]
A table, or a column in a table.
Alias (string) -- [REQUIRED]
A temporary name given to a table, or a column in a table.
OutputSchemas (list) --
Specifies the data schema for the SparkSQL transform.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
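A sketch of a SparkSQL node using the MyDataSource/SqlName alias example above; treating From as the upstream node name is an assumption made for this example.
spark_sql_node = {
    'SparkSQL': {
        'Name': 'SQL transform',
        'Inputs': ['MyDataSource'],                      # upstream node being aliased
        'SqlQuery': 'select * from SqlName',
        'SqlAliases': [
            {'From': 'MyDataSource', 'Alias': 'SqlName'},
        ],
    }
}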
DirectKinesisSource (dict) --
Specifies a direct Amazon Kinesis data source.
Name (string) -- [REQUIRED]
The name of the data source.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
StreamingOptions (dict) --
Additional options for the Kinesis streaming data source.
EndpointUrl (string) --
The URL of the Kinesis endpoint.
StreamName (string) --
The name of the Kinesis data stream.
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingPosition (string) --
The starting position in the Kinesis data stream to read data from. The possible values are "latest" , "trim_horizon" , or "earliest" . The default value is "latest" .
MaxFetchTimeInMs (integer) --
The maximum time spent in the job executor to fetch a record from the Kinesis data stream per shard, specified in milliseconds (ms). The default value is 1000 .
MaxFetchRecordsPerShard (integer) --
The maximum number of records to fetch per shard in the Kinesis data stream. The default value is 100000 .
MaxRecordPerRead (integer) --
The maximum number of records to fetch from the Kinesis data stream in each getRecords operation. The default value is 10000 .
AddIdleTimeBetweenReads (boolean) --
Adds a time delay between two consecutive getRecords operations. The default value is "False" . This option is only configurable for Glue version 2.0 and above.
IdleTimeBetweenReadsInMs (integer) --
The minimum time delay between two consecutive getRecords operations, specified in ms. The default value is 1000 . This option is only configurable for Glue version 2.0 and above.
DescribeShardInterval (integer) --
The minimum time interval between two ListShards API calls for your script to consider resharding. The default value is 1s .
NumRetries (integer) --
The maximum number of retries for Kinesis Data Streams API requests. The default value is 3 .
RetryIntervalMs (integer) --
The cool-off time period (specified in ms) before retrying the Kinesis Data Streams API call. The default value is 1000 .
MaxRetryIntervalMs (integer) --
The maximum cool-off time period (specified in ms) between two retries of a Kinesis Data Streams API call. The default value is 10000 .
AvoidEmptyBatches (boolean) --
Avoids creating an empty microbatch job by checking for unread data in the Kinesis data stream before the batch is started. The default value is "False" .
StreamArn (string) --
The Amazon Resource Name (ARN) of the Kinesis data stream.
RoleArn (string) --
The Amazon Resource Name (ARN) of the role to assume using AWS Security Token Service (AWS STS). This role must have permissions for describe or read record operations for the Kinesis data stream. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSSessionName" .
RoleSessionName (string) --
An identifier for the session assuming the role using AWS STS. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSRoleARN" .
DataPreviewOptions (dict) --
Additional options for data preview.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
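A sketch of a DirectKinesisSource node reading a stream in another account, per the RoleArn and RoleSessionName notes above; the ARNs and session name are placeholders.
kinesis_source_node = {
    'DirectKinesisSource': {
        'Name': 'Kinesis source',
        'WindowSize': 100,       # illustrative micro-batch window
        'DetectSchema': True,
        'StreamingOptions': {
            'StreamArn': 'arn:aws:kinesis:us-east-1:123456789012:stream/example-stream',
            'StartingPosition': 'trim_horizon',
            'RoleArn': 'arn:aws:iam::123456789012:role/ExampleKinesisReadRole',
            'RoleSessionName': 'glue-streaming-session',
        },
    }
}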
DirectKafkaSource (dict) --
Specifies an Apache Kafka data store.
Name (string) -- [REQUIRED]
The name of the data store.
StreamingOptions (dict) --
Specifies the streaming options.
BootstrapServers (string) --
A list of bootstrap server URLs, for example, as b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094 . This option must be specified in the API call or defined in the table metadata in the Data Catalog.
SecurityProtocol (string) --
The protocol used to communicate with brokers. The possible values are "SSL" or "PLAINTEXT" .
ConnectionName (string) --
The name of the connection.
TopicName (string) --
The topic name as specified in Apache Kafka. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Assign (string) --
The specific TopicPartitions to consume. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
SubscribePattern (string) --
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingOffsets (string) --
The starting position in the Kafka topic to read data from. The possible values are "earliest" or "latest" . The default value is "latest" .
EndingOffsets (string) --
The end point when a batch query is ended. Possible values are either "latest" or a JSON string that specifies an ending offset for each TopicPartition .
PollTimeoutMs (integer) --
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is 512 .
NumRetries (integer) --
The number of times to retry before failing to fetch Kafka offsets. The default value is 3 .
RetryIntervalMs (integer) --
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is 10 .
MaxOffsetsPerTrigger (integer) --
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across topicPartitions of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.
MinPartitions (integer) --
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of spark partitions is equal to the number of Kafka partitions.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
DataPreviewOptions (dict) --
Specifies options related to data preview for viewing a sample of your data.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
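A sketch of a DirectKafkaSource node that identifies the topic by name, per the topicName/assign/subscribePattern rule above; the connection name and topic are placeholders.
kafka_source_node = {
    'DirectKafkaSource': {
        'Name': 'Kafka source',
        'WindowSize': 100,       # illustrative micro-batch window
        'DetectSchema': True,
        'StreamingOptions': {
            'ConnectionName': 'example-kafka-connection',
            'TopicName': 'orders',        # at least one of topicName, assign, or subscribePattern
            'SecurityProtocol': 'SSL',
            'StartingOffsets': 'earliest',
        },
    }
}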
CatalogKinesisSource (dict) --
Specifies a Kinesis data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
Database (string) -- [REQUIRED]
The name of the database to read from.
StreamingOptions (dict) --
Additional options for the Kinesis streaming data source.
EndpointUrl (string) --
The URL of the Kinesis endpoint.
StreamName (string) --
The name of the Kinesis data stream.
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingPosition (string) --
The starting position in the Kinesis data stream to read data from. The possible values are "latest" , "trim_horizon" , or "earliest" . The default value is "latest" .
MaxFetchTimeInMs (integer) --
The maximum time spent in the job executor to fetch a record from the Kinesis data stream per shard, specified in milliseconds (ms). The default value is 1000 .
MaxFetchRecordsPerShard (integer) --
The maximum number of records to fetch per shard in the Kinesis data stream. The default value is 100000 .
MaxRecordPerRead (integer) --
The maximum number of records to fetch from the Kinesis data stream in each getRecords operation. The default value is 10000 .
AddIdleTimeBetweenReads (boolean) --
Adds a time delay between two consecutive getRecords operations. The default value is "False" . This option is only configurable for Glue version 2.0 and above.
IdleTimeBetweenReadsInMs (integer) --
The minimum time delay between two consecutive getRecords operations, specified in ms. The default value is 1000 . This option is only configurable for Glue version 2.0 and above.
DescribeShardInterval (integer) --
The minimum time interval between two ListShards API calls for your script to consider resharding. The default value is 1s .
NumRetries (integer) --
The maximum number of retries for Kinesis Data Streams API requests. The default value is 3 .
RetryIntervalMs (integer) --
The cool-off time period (specified in ms) before retrying the Kinesis Data Streams API call. The default value is 1000 .
MaxRetryIntervalMs (integer) --
The maximum cool-off time period (specified in ms) between two retries of a Kinesis Data Streams API call. The default value is 10000 .
AvoidEmptyBatches (boolean) --
Avoids creating an empty microbatch job by checking for unread data in the Kinesis data stream before the batch is started. The default value is "False" .
StreamArn (string) --
The Amazon Resource Name (ARN) of the Kinesis data stream.
RoleArn (string) --
The Amazon Resource Name (ARN) of the role to assume using AWS Security Token Service (AWS STS). This role must have permissions for describe or read record operations for the Kinesis data stream. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSSessionName" .
RoleSessionName (string) --
An identifier for the session assuming the role using AWS STS. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSRoleARN" .
DataPreviewOptions (dict) --
Additional options for data preview.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
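For illustration only, a hypothetical CodeGenConfigurationNodes entry (the node key, database, table, and option values below are invented, not taken from this document) showing how a CatalogKinesisSource and a few of the streaming options above fit together when passed to create_job:

# Hypothetical node under the CodeGenConfigurationNodes parameter of create_job;
# only a subset of the Kinesis streaming options is set, the rest keep their defaults.
catalog_kinesis_node = {
    'node-1': {
        'CatalogKinesisSource': {
            'Name': 'example-kinesis-source',
            'Database': 'example_db',
            'Table': 'example_stream_table',
            'WindowSize': 100,
            'DetectSchema': True,
            'StreamingOptions': {
                'StartingPosition': 'trim_horizon',
                'MaxFetchRecordsPerShard': 100000,
                'AddIdleTimeBetweenReads': True,
                'IdleTimeBetweenReadsInMs': 1000
            },
            'DataPreviewOptions': {'PollingTime': 500, 'RecordPollingLimit': 100}
        }
    }
}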
CatalogKafkaSource (dict) --
Specifies an Apache Kafka data store in the Data Catalog.
Name (string) -- [REQUIRED]
The name of the data store.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
Database (string) -- [REQUIRED]
The name of the database to read from.
StreamingOptions (dict) --
Specifies the streaming options.
BootstrapServers (string) --
A list of bootstrap server URLs, for example b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094. This option must be specified in the API call or defined in the table metadata in the Data Catalog.
SecurityProtocol (string) --
The protocol used to communicate with brokers. The possible values are "SSL" or "PLAINTEXT" .
ConnectionName (string) --
The name of the connection.
TopicName (string) --
The topic name as specified in Apache Kafka. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Assign (string) --
The specific TopicPartitions to consume. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
SubscribePattern (string) --
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingOffsets (string) --
The starting position in the Kafka topic to read data from. The possible values are "earliest" or "latest" . The default value is "latest" .
EndingOffsets (string) --
The end point when a batch query is ended. Possible values are either "latest" or a JSON string that specifies an ending offset for each TopicPartition .
PollTimeoutMs (integer) --
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is 512 .
NumRetries (integer) --
The number of times to retry before failing to fetch Kafka offsets. The default value is 3 .
RetryIntervalMs (integer) --
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is 10 .
MaxOffsetsPerTrigger (integer) --
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across topicPartitions of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.
MinPartitions (integer) --
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of spark partitions is equal to the number of Kafka partitions.
DataPreviewOptions (dict) --
Specifies options related to data preview for viewing a sample of your data.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
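Similarly, a hypothetical CatalogKafkaSource node (the connection, topic, database, and table names are invented for illustration):

# Hypothetical node under CodeGenConfigurationNodes; the streaming options
# subscribe to a single topic and read it from the earliest offset.
catalog_kafka_node = {
    'node-2': {
        'CatalogKafkaSource': {
            'Name': 'example-kafka-source',
            'Database': 'example_db',
            'Table': 'example_topic_table',
            'WindowSize': 100,
            'DetectSchema': True,
            'StreamingOptions': {
                'ConnectionName': 'example-kafka-connection',
                'TopicName': 'example-topic',
                'StartingOffsets': 'earliest',
                'PollTimeoutMs': 512,
                'NumRetries': 3
            }
        }
    }
}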
DropNullFields (dict) --
Specifies a transform that removes columns from the dataset if all values in the column are 'null'. By default, Glue Studio recognizes null objects, but some values, such as empty strings, the string "null", integers with the value -1, or other placeholders such as zeros, are not automatically recognized as nulls.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
NullCheckBoxList (dict) --
A structure that represents whether certain values are recognized as null values for removal.
IsEmpty (boolean) --
Specifies that an empty string is considered as a null value.
IsNullString (boolean) --
Specifies that a value spelling out the word 'null' is considered as a null value.
IsNegOne (boolean) --
Specifies that an integer value of -1 is considered as a null value.
NullTextList (list) --
A structure that specifies a list of NullValueField structures that represent a custom null value such as zero or other value being used as a null placeholder unique to the dataset.
The DropNullFields transform removes custom null values only if both the value of the null placeholder and the datatype match the data.
(dict) --
Represents a custom null value such as a zeros or other value being used as a null placeholder unique to the dataset.
Value (string) -- [REQUIRED]
The value of the null placeholder.
Datatype (dict) -- [REQUIRED]
The datatype of the value.
Id (string) -- [REQUIRED]
The datatype of the value.
Label (string) -- [REQUIRED]
A label assigned to the datatype.
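A hypothetical DropNullFields node (the node name, input name, placeholder value, and datatype Id/Label are invented for illustration) that treats empty strings, the literal string 'null', and a custom placeholder of 0 as nulls:

# Hypothetical transform node; the Datatype Id and Label values are assumptions.
drop_null_fields_node = {
    'node-3': {
        'DropNullFields': {
            'Name': 'drop_null_columns',
            'Inputs': ['example-upstream-node'],
            'NullCheckBoxList': {
                'IsEmpty': True,
                'IsNullString': True,
                'IsNegOne': False
            },
            'NullTextList': [
                {'Value': '0', 'Datatype': {'Id': 'int', 'Label': 'int'}}
            ]
        }
    }
}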
Merge (dict) --
Specifies a transform that merges a DynamicFrame with a staging DynamicFrame based on the specified primary keys to identify records. Duplicate records (records with the same primary keys) are not de-duplicated.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Source (string) -- [REQUIRED]
The source DynamicFrame that will be merged with a staging DynamicFrame .
PrimaryKeys (list) -- [REQUIRED]
The list of primary key fields to match records from the source and staging dynamic frames.
(list) --
(string) --
Union (dict) --
Specifies a transform that combines the rows from two or more datasets into a single result.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The node ID inputs to the transform.
(string) --
UnionType (string) -- [REQUIRED]
Indicates the type of Union transform.
Specify ALL to join all rows from data sources to the resulting DynamicFrame. The resulting union does not remove duplicate rows.
Specify DISTINCT to remove duplicate rows in the resulting DynamicFrame.
PIIDetection (dict) --
Specifies a transform that identifies, removes or masks PII data.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The node ID inputs to the transform.
(string) --
PiiType (string) -- [REQUIRED]
Indicates the type of PIIDetection transform.
EntityTypesToDetect (list) -- [REQUIRED]
Indicates the types of entities the PIIDetection transform will identify as PII data.
PII type entities include: PERSON_NAME, DATE, USA_SNN, EMAIL, USA_ITIN, USA_PASSPORT_NUMBER, PHONE_NUMBER, BANK_ACCOUNT, IP_ADDRESS, MAC_ADDRESS, USA_CPT_CODE, USA_HCPCS_CODE, USA_NATIONAL_DRUG_CODE, USA_MEDICARE_BENEFICIARY_IDENTIFIER, USA_HEALTH_INSURANCE_CLAIM_NUMBER, CREDIT_CARD, USA_NATIONAL_PROVIDER_IDENTIFIER, USA_DEA_NUMBER, USA_DRIVING_LICENSE
(string) --
OutputColumnName (string) --
Indicates the output column name that will contain any entity type detected in that row.
SampleFraction (float) --
Indicates the fraction of the data to sample when scanning for PII entities.
ThresholdFraction (float) --
Indicates the fraction of the data that must be met in order for a column to be identified as PII data.
MaskValue (string) --
Indicates the value that will replace the detected entity.
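A hypothetical PIIDetection node (the names and fractions are invented) that samples 20 percent of the data, treats a column as PII when at least 10 percent of sampled values match, and masks email addresses and phone numbers:

# Hypothetical transform node; EMAIL and PHONE_NUMBER are taken from the
# entity type list above, the remaining values are placeholders.
pii_detection_node = {
    'node-4': {
        'PIIDetection': {
            'Name': 'mask_contact_details',
            'Inputs': ['example-upstream-node'],
            'PiiType': 'ColumnMasking',
            'EntityTypesToDetect': ['EMAIL', 'PHONE_NUMBER'],
            'SampleFraction': 0.2,
            'ThresholdFraction': 0.1,
            'MaskValue': '####'
        }
    }
}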
Aggregate (dict) --
Specifies a transform that groups rows by chosen fields and computes the aggregated value by specified function.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
Specifies the fields and rows to use as inputs for the aggregate transform.
(string) --
Groups (list) -- [REQUIRED]
Specifies the fields to group by.
(list) --
(string) --
Aggs (list) -- [REQUIRED]
Specifies the aggregate functions to be performed on specified fields.
(dict) --
Specifies the set of parameters needed to perform aggregation in the aggregate transform.
Column (list) -- [REQUIRED]
Specifies the column on the data set on which the aggregation function will be applied.
(string) --
AggFunc (string) -- [REQUIRED]
Specifies the aggregation function to apply.
Possible aggregation functions include: avg, countDistinct, count, first, last, kurtosis, max, min, skewness, stddev_samp, stddev_pop, sum, sumDistinct, var_samp, var_pop
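A hypothetical Aggregate node (the node, input, and column names are invented) that groups by customer_id and computes a sum and a count:

# Hypothetical transform node; 'sum' and 'count' are taken from the list above.
aggregate_node = {
    'node-5': {
        'Aggregate': {
            'Name': 'totals_by_customer',
            'Inputs': ['example-upstream-node'],
            'Groups': [['customer_id']],
            'Aggs': [
                {'Column': ['order_total'], 'AggFunc': 'sum'},
                {'Column': ['order_id'], 'AggFunc': 'count'}
            ]
        }
    }
}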
DropDuplicates (dict) --
Specifies a transform that removes rows of repeating data from a data set.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Columns (list) --
The name of the columns to be merged or removed if repeating.
(list) --
(string) --
GovernedCatalogTarget (dict) --
Specifies a data target that writes to a governed catalog.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
Database (string) -- [REQUIRED]
The name of the database to write to.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the governed catalog.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
GovernedCatalogSource (dict) --
Specifies a data source in a governed Data Catalog.
Name (string) -- [REQUIRED]
The name of the data store.
Database (string) -- [REQUIRED]
The database to read from.
Table (string) -- [REQUIRED]
The database table to read from.
PartitionPredicate (string) --
Partitions satisfying this predicate are deleted. Files within the retention period in these partitions are not deleted. Set to "" – empty by default.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
MicrosoftSQLServerCatalogSource (dict) --
Specifies a Microsoft SQL server data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
MySQLCatalogSource (dict) --
Specifies a MySQL data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
OracleSQLCatalogSource (dict) --
Specifies an Oracle data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
PostgreSQLCatalogSource (dict) --
Specifies a PostgreSQL data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
MicrosoftSQLServerCatalogTarget (dict) --
Specifies a target that uses Microsoft SQL.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The name of the database to write to.
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
MySQLCatalogTarget (dict) --
Specifies a target that uses MySQL.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The name of the database to write to.
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
OracleSQLCatalogTarget (dict) --
Specifies a target that uses Oracle SQL.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The name of the database to write to.
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
PostgreSQLCatalogTarget (dict) --
Specifies a target that uses PostgreSQL.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The name of the database to write to.
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
string
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
dict
Response Syntax
{ 'Name': 'string' }
Response Structure
(dict) --
Name (string) --
The unique name that was provided for this job definition.
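A minimal boto3 sketch of creating a job that opts into the flexible execution class. The job name, role ARN, and script location are placeholders; FLEX requires Glue version 3.0 or later and the glueetl command type, as noted above:

import boto3

glue = boto3.client('glue')

# Placeholder name, role, and script location; ExecutionClass='FLEX' runs
# non-urgent work on spare capacity with variable start and completion times.
response = glue.create_job(
    Name='example-flex-job',
    Role='arn:aws:iam::123456789012:role/ExampleGlueJobRole',
    Command={
        'Name': 'glueetl',
        'ScriptLocation': 's3://example-bucket/scripts/example_job.py',
        'PythonVersion': '3'
    },
    GlueVersion='3.0',
    WorkerType='G.1X',
    NumberOfWorkers=10,
    ExecutionClass='FLEX'
)
print(response['Name'])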
{'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'WAITING', 'ERROR'}}}}
Creates a new trigger.
See also: AWS API Documentation
Request Syntax
client.create_trigger( Name='string', WorkflowName='string', Type='SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', Schedule='string', Predicate={ 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, Actions=[ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], Description='string', StartOnCreation=True|False, Tags={ 'string': 'string' }, EventBatchingCondition={ 'BatchSize': 123, 'BatchWindow': 123 } )
string
[REQUIRED]
The name of the trigger.
string
The name of the workflow associated with the trigger.
string
[REQUIRED]
The type of the new trigger.
string
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
This field is required when the trigger type is SCHEDULED.
dict
A predicate to specify when the new trigger should fire.
This field is required when the trigger type is CONDITIONAL .
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
list
[REQUIRED]
The actions initiated by this trigger when it fires.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
string
A description of the new trigger.
boolean
Set to true to start SCHEDULED and CONDITIONAL triggers when created. True is not supported for ON_DEMAND triggers.
dict
The tags to use with this trigger. You may use tags to limit access to the trigger. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
(string) --
(string) --
dict
A batch condition that must be met (a specified number of events received or the batch time window expired) before the EventBridge event trigger fires.
BatchSize (integer) -- [REQUIRED]
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
dict
Response Syntax
{ 'Name': 'string' }
Response Structure
(dict) --
Name (string) --
The name of the trigger.
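A minimal boto3 sketch of the create_trigger call above. The trigger and job names are placeholders; the conditional predicate starts a downstream job once a hypothetical upstream job run reaches SUCCEEDED:

import boto3

glue = boto3.client('glue')

# Placeholder trigger and job names; the predicate watches one upstream job
# and the action starts a downstream job when that run succeeds.
response = glue.create_trigger(
    Name='example-conditional-trigger',
    Type='CONDITIONAL',
    StartOnCreation=True,
    Predicate={
        'Logical': 'AND',
        'Conditions': [
            {
                'LogicalOperator': 'EQUALS',
                'JobName': 'example-upstream-job',
                'State': 'SUCCEEDED'
            }
        ]
    },
    Actions=[
        {'JobName': 'example-downstream-job', 'Timeout': 120}
    ],
    Description='Start the downstream job after the upstream job succeeds.'
)
print(response['Name'])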
{'Job': {'ExecutionClass': 'FLEX | STANDARD'}}
Retrieves an existing job definition.
See also: AWS API Documentation
Request Syntax
client.get_job( JobName='string' )
string
[REQUIRED]
The name of the job definition to retrieve.
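A minimal sketch (the job name is a placeholder) that retrieves a job definition and reads its execution class from the response:

import boto3

glue = boto3.client('glue')

# Placeholder job name; ExecutionClass is 'FLEX' or 'STANDARD' when present.
# The 'STANDARD' fallback below is only for display when the field is absent.
job = glue.get_job(JobName='example-flex-job')['Job']
print(job['Name'], job.get('ExecutionClass', 'STANDARD'))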
dict
Response Syntax
{ 'Job': { 'Name': 'string', 'Description': 'string', 'LogUri': 'string', 'Role': 'string', 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'ExecutionProperty': { 'MaxConcurrentRuns': 123 }, 'Command': { 'Name': 'string', 'ScriptLocation': 'string', 'PythonVersion': 'string' }, 'DefaultArguments': { 'string': 'string' }, 'NonOverridableArguments': { 'string': 'string' }, 'Connections': { 'Connections': [ 'string', ] }, 'MaxRetries': 123, 'AllocatedCapacity': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'CodeGenConfigurationNodes': { 'string': { 'AthenaConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'ConnectionTable': 'string', 'SchemaName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'JDBCConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'FilterPredicate': 'string', 'PartitionColumn': 'string', 'LowerBound': 123, 'UpperBound': 123, 'NumPartitions': 123, 'JobBookmarkKeys': [ 'string', ], 'JobBookmarkKeysSortOrder': 'string', 'DataTypeMapping': { 'string': 'DATE'|'STRING'|'TIMESTAMP'|'INT'|'FLOAT'|'LONG'|'BIGDECIMAL'|'BYTE'|'SHORT'|'DOUBLE' } }, 'ConnectionTable': 'string', 'Query': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'RedshiftSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string' }, 'S3CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'S3CsvSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'Separator': 'comma'|'ctrla'|'pipe'|'semicolon'|'tab', 'Escaper': 'string', 'QuoteChar': 'quote'|'quillemet'|'single_quote'|'disabled', 'Multiline': True|False, 'WithHeader': True|False, 'WriteHeader': True|False, 'SkipFirst': True|False, 'OptimizePerformance': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3JsonSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'JsonPath': 'string', 'Multiline': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3ParquetSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 
'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'RelationalCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'DynamoDBCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'JDBCConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectionTable': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'RedshiftTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string', 'UpsertRedshiftOptions': { 'TableLocation': 'string', 'ConnectionName': 'string', 'UpsertKeys': [ 'string', ] } }, 'S3CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3GlueParquetTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'S3DirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'string', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'ApplyMapping': { 'Name': 'string', 'Inputs': [ 'string', ], 'Mapping': [ { 'ToKey': 'string', 'FromPath': [ 'string', ], 'FromType': 'string', 'ToType': 'string', 'Dropped': True|False, 'Children': {'... 
recursive ...'} }, ] }, 'SelectFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'DropFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'RenameField': { 'Name': 'string', 'Inputs': [ 'string', ], 'SourcePath': [ 'string', ], 'TargetPath': [ 'string', ] }, 'Spigot': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Topk': 123, 'Prob': 123.0 }, 'Join': { 'Name': 'string', 'Inputs': [ 'string', ], 'JoinType': 'equijoin'|'left'|'right'|'outer'|'leftsemi'|'leftanti', 'Columns': [ { 'From': 'string', 'Keys': [ [ 'string', ], ] }, ] }, 'SplitFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'SelectFromCollection': { 'Name': 'string', 'Inputs': [ 'string', ], 'Index': 123 }, 'FillMissingValues': { 'Name': 'string', 'Inputs': [ 'string', ], 'ImputedPath': 'string', 'FilledPath': 'string' }, 'Filter': { 'Name': 'string', 'Inputs': [ 'string', ], 'LogicalOperator': 'AND'|'OR', 'Filters': [ { 'Operation': 'EQ'|'LT'|'GT'|'LTE'|'GTE'|'REGEX'|'ISNULL', 'Negated': True|False, 'Values': [ { 'Type': 'COLUMNEXTRACTED'|'CONSTANT', 'Value': [ 'string', ] }, ] }, ] }, 'CustomCode': { 'Name': 'string', 'Inputs': [ 'string', ], 'Code': 'string', 'ClassName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkSQL': { 'Name': 'string', 'Inputs': [ 'string', ], 'SqlQuery': 'string', 'SqlAliases': [ { 'From': 'string', 'Alias': 'string' }, ], 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'DirectKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DirectKafkaSource': { 'Name': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123 }, 'WindowSize': 123, 'DetectSchema': True|False, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string' }, 
'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKafkaSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123 }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DropNullFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'NullCheckBoxList': { 'IsEmpty': True|False, 'IsNullString': True|False, 'IsNegOne': True|False }, 'NullTextList': [ { 'Value': 'string', 'Datatype': { 'Id': 'string', 'Label': 'string' } }, ] }, 'Merge': { 'Name': 'string', 'Inputs': [ 'string', ], 'Source': 'string', 'PrimaryKeys': [ [ 'string', ], ] }, 'Union': { 'Name': 'string', 'Inputs': [ 'string', ], 'UnionType': 'ALL'|'DISTINCT' }, 'PIIDetection': { 'Name': 'string', 'Inputs': [ 'string', ], 'PiiType': 'RowAudit'|'RowMasking'|'ColumnAudit'|'ColumnMasking', 'EntityTypesToDetect': [ 'string', ], 'OutputColumnName': 'string', 'SampleFraction': 123.0, 'ThresholdFraction': 123.0, 'MaskValue': 'string' }, 'Aggregate': { 'Name': 'string', 'Inputs': [ 'string', ], 'Groups': [ [ 'string', ], ], 'Aggs': [ { 'Column': [ 'string', ], 'AggFunc': 'avg'|'countDistinct'|'count'|'first'|'last'|'kurtosis'|'max'|'min'|'skewness'|'stddev_samp'|'stddev_pop'|'sum'|'sumDistinct'|'var_samp'|'var_pop' }, ] }, 'DropDuplicates': { 'Name': 'string', 'Inputs': [ 'string', ], 'Columns': [ [ 'string', ], ] }, 'GovernedCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'GovernedCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'MicrosoftSQLServerCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MicrosoftSQLServerCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' } } }, 'ExecutionClass': 'FLEX'|'STANDARD' } }
Response Structure
(dict) --
Job (dict) --
The requested job definition.
Name (string) --
The name you assign to this job definition.
Description (string) --
A description of the job.
LogUri (string) --
This field is reserved for future use.
Role (string) --
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
CreatedOn (datetime) --
The time and date that this job definition was created.
LastModifiedOn (datetime) --
The last point in time when this job definition was modified.
ExecutionProperty (dict) --
An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.
MaxConcurrentRuns (integer) --
The maximum number of concurrent runs allowed for the job. The default is 1. An error is returned when this threshold is reached. The maximum value you can specify is controlled by a service limit.
Command (dict) --
The JobCommand that runs this job.
Name (string) --
The name of the job command. For an Apache Spark ETL job, this must be glueetl . For a Python shell job, it must be pythonshell . For an Apache Spark streaming ETL job, this must be gluestreaming .
ScriptLocation (string) --
Specifies the Amazon Simple Storage Service (Amazon S3) path to a script that runs a job.
PythonVersion (string) --
The Python version being used to run a Python shell job. Allowed values are 2 or 3.
DefaultArguments (dict) --
The default arguments for this job, specified as name-value pairs.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
NonOverridableArguments (dict) --
Non-overridable arguments for this job, specified as name-value pairs.
(string) --
(string) --
Connections (dict) --
The connections used for this job.
Connections (list) --
A list of connections used by the job.
(string) --
MaxRetries (integer) --
The maximum number of times to retry this job after a JobRun fails.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to runs of this job. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Timeout (integer) --
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
For Glue version 2.0 jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job.
NotificationProperty (dict) --
Specifies configuration properties of a job notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
CodeGenConfigurationNodes (dict) --
The representation of a directed acyclic graph on which both the Glue Studio visual component and Glue Studio code generation are based.
(string) --
(dict) --
CodeGenConfigurationNode enumerates all valid Node types. One and only one of its member variables can be populated.
AthenaConnectorSource (dict) --
Specifies a connector to an Amazon Athena data source.
Name (string) --
The name of the data source.
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectorName (string) --
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) --
The type of connection, such as marketplace.athena or custom.athena, designating a connection to an Amazon Athena data store.
ConnectionTable (string) --
The name of the table in the data source.
SchemaName (string) --
The name of the Cloudwatch log group to read from. For example, /aws-glue/jobs/output .
OutputSchemas (list) --
Specifies the data schema for the custom Athena source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
JDBCConnectorSource (dict) --
Specifies a connector to a JDBC data source.
Name (string) --
The name of the data source.
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectorName (string) --
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) --
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
FilterPredicate (string) --
Extra condition clause to filter data from source. For example:
BillingCity='Mountain View'
When using a query instead of a table name, you should validate that the query works with the specified filterPredicate .
PartitionColumn (string) --
The name of an integer column that is used for partitioning. This option works only when it's included with lowerBound , upperBound , and numPartitions . This option works the same way as in the Spark SQL JDBC reader.
LowerBound (integer) --
The minimum value of partitionColumn that is used to decide partition stride.
UpperBound (integer) --
The maximum value of partitionColumn that is used to decide partition stride.
NumPartitions (integer) --
The number of partitions. This value, along with lowerBound (inclusive) and upperBound (exclusive), form partition strides for generated WHERE clause expressions that are used to split the partitionColumn .
JobBookmarkKeys (list) --
The name of the job bookmark keys on which to sort.
(string) --
JobBookmarkKeysSortOrder (string) --
Specifies an ascending or descending sort order.
DataTypeMapping (dict) --
Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option "dataTypeMapping":{"FLOAT":"STRING"} maps data fields of JDBC type FLOAT into the Java String type by calling the ResultSet.getString() method of the driver, and uses it to build the Glue record. The ResultSet object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.
(string) --
(string) --
ConnectionTable (string) --
The name of the table in the data source.
Query (string) --
The table or SQL query to get the data from. You can specify either ConnectionTable or query , but not both.
OutputSchemas (list) --
Specifies the data schema for the custom JDBC source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
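For illustration, a hypothetical JDBCConnectorSource node (the connection, connector, table, and column names are invented) as it might appear in the returned CodeGenConfigurationNodes, combining the partitioning options and a data type mapping described above:

# Hypothetical node; all identifiers are placeholders.
jdbc_source_node = {
    'node-6': {
        'JDBCConnectorSource': {
            'Name': 'example-jdbc-source',
            'ConnectionName': 'example-jdbc-connection',
            'ConnectorName': 'example-connector',
            'ConnectionType': 'custom.jdbc',
            'ConnectionTable': 'orders',
            'AdditionalOptions': {
                # Partitioned read: 10 strides over order_id in [0, 1000000).
                'PartitionColumn': 'order_id',
                'LowerBound': 0,
                'UpperBound': 1000000,
                'NumPartitions': 10,
                'FilterPredicate': "order_status='SHIPPED'",
                # Read JDBC FLOAT fields as strings.
                'DataTypeMapping': {'FLOAT': 'STRING'}
            }
        }
    }
}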
SparkConnectorSource (dict) --
Specifies a connector to an Apache Spark data source.
Name (string) --
The name of the data source.
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectorName (string) --
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) --
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies data schema for the custom spark source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
CatalogSource (dict) --
Specifies a data store in the Glue Data Catalog.
Name (string) --
The name of the data store.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
RedshiftSource (dict) --
Specifies an Amazon Redshift data store.
Name (string) --
The name of the Amazon Redshift data store.
Database (string) --
The database to read from.
Table (string) --
The database table to read from.
RedshiftTmpDir (string) --
The Amazon S3 path where temporary data can be staged when copying out of the database.
TmpDirIAMRole (string) --
The IAM role with permissions.
S3CatalogSource (dict) --
Specifies an Amazon S3 data store in the Glue Data Catalog.
Name (string) --
The name of the data store.
Database (string) --
The database to read from.
Table (string) --
The database table to read from.
PartitionPredicate (string) --
Partitions satisfying this predicate are deleted. Files within the retention period in these partitions are not deleted. Set to "" – empty by default.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
S3CsvSource (dict) --
Specifies a comma-separated value (CSV) data store stored in Amazon S3.
Name (string) --
The name of the data store.
Paths (list) --
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
Separator (string) --
Specifies the delimiter character. The default is a comma: ",", but any other character can be specified.
Escaper (string) --
Specifies a character to use for escaping. This option is used only when reading CSV files. The default value is none . If enabled, the character which immediately follows is used as-is, except for a small set of well-known escapes ( \n , \r , \t , and \0 ).
QuoteChar (string) --
Specifies the character to use for quoting. The default is a double quote: '"' . Set this to -1 to turn off quoting entirely.
Multiline (boolean) --
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The default value is False , which allows for more aggressive file-splitting during parsing.
WithHeader (boolean) --
A Boolean value that specifies whether to treat the first line as a header. The default value is False .
WriteHeader (boolean) --
A Boolean value that specifies whether to write the header to output. The default value is True .
SkipFirst (boolean) --
A Boolean value that specifies whether to skip the first data line. The default value is False .
OptimizePerformance (boolean) --
A Boolean value that specifies whether to use the advanced SIMD CSV reader along with Apache Arrow based columnar memory formats. Only available in Glue version 3.0.
OutputSchemas (list) --
Specifies the data schema for the S3 CSV source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
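A hypothetical S3CsvSource node (the bucket path and names are invented) for pipe-delimited, double-quoted files with a header row, read recursively while excluding temporary files:

# Hypothetical node; Separator and QuoteChar use values from the enumerations above.
s3_csv_source_node = {
    'node-7': {
        'S3CsvSource': {
            'Name': 'example-csv-source',
            'Paths': ['s3://example-bucket/input/'],
            'Separator': 'pipe',
            'QuoteChar': 'quote',
            'WithHeader': True,
            'Recurse': True,
            'Exclusions': ['**.tmp']
        }
    }
}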
S3JsonSource (dict) --
Specifies a JSON data store stored in Amazon S3.
Name (string) --
The name of the data store.
Paths (list) --
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
JsonPath (string) --
A JsonPath string defining the JSON data.
Multiline (boolean) --
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The default value is False , which allows for more aggressive file-splitting during parsing.
OutputSchemas (list) --
Specifies the data schema for the S3 JSON source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
S3ParquetSource (dict) --
Specifies an Apache Parquet data store stored in Amazon S3.
Name (string) --
The name of the data store.
Paths (list) --
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "snappy", "lzo", "gzip", "uncompressed", and "none".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
OutputSchemas (list) --
Specifies the data schema for the S3 Parquet source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
RelationalCatalogSource (dict) --
Specifies a Relational database data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
DynamoDBCatalogSource (dict) --
Specifies a DynamoDB data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
JDBCConnectorTarget (dict) --
Specifies a data target that writes to a JDBC data store using a JDBC connector.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectionTable (string) --
The name of the table in the data target.
ConnectorName (string) --
The name of a connector that will be used.
ConnectionType (string) --
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data target.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies the data schema for the JDBC target.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
SparkConnectorTarget (dict) --
Specifies a target that uses an Apache Spark connector.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
ConnectionName (string) --
The name of a connection for an Apache Spark connector.
ConnectorName (string) --
The name of an Apache Spark connector.
ConnectionType (string) --
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies the data schema for the custom spark target.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
CatalogTarget (dict) --
Specifies a target that uses a Glue Data Catalog table.
Name (string) --
The name of your data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The database that contains the table you want to use as the target. This database must already exist in the Data Catalog.
Table (string) --
The table that defines the schema of your output data. This table must already exist in the Data Catalog.
RedshiftTarget (dict) --
Specifies a target that uses Amazon Redshift.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
RedshiftTmpDir (string) --
The Amazon S3 path where temporary data can be staged when copying out of the database.
TmpDirIAMRole (string) --
The IAM role with permissions.
UpsertRedshiftOptions (dict) --
The set of options to configure an upsert operation when writing to a Redshift target.
TableLocation (string) --
The physical location of the Redshift table.
ConnectionName (string) --
The name of the connection to use to write to Redshift.
UpsertKeys (list) --
The keys used to determine whether to perform an update or insert.
(string) --
S3CatalogTarget (dict) --
Specifies a data target that writes to Amazon S3 using the Glue Data Catalog.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Table (string) --
The name of the table in the database to write to.
Database (string) --
The name of the database to write to.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
S3GlueParquetTarget (dict) --
Specifies a data target that writes to Amazon S3 in Apache Parquet columnar storage.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Path (string) --
A single Amazon S3 path to write to.
Compression (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip".
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
Table (string) --
Specifies the table in the database that the schema change policy applies to.
Database (string) --
Specifies the database that the schema change policy applies to.
S3DirectTarget (dict) --
Specifies a data target that writes to Amazon S3.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Path (string) --
A single Amazon S3 path to write to.
Compression (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip".
Format (string) --
Specifies the data output format for the target.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
Table (string) --
Specifies the table in the database that the schema change policy applies to.
Database (string) --
Specifies the database that the schema change policy applies to.
ApplyMapping (dict) --
Specifies a transform that maps data property keys in the data source to data property keys in the data target. You can rename keys, modify the data types for keys, and choose which keys to drop from the dataset.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Mapping (list) --
Specifies the mapping of data property keys in the data source to data property keys in the data target.
(dict) --
Specifies the mapping of data property keys.
ToKey (string) --
After the apply mapping, what the name of the column should be. Can be the same as FromPath .
FromPath (list) --
The table or column to be modified.
(string) --
FromType (string) --
The type of the data to be modified.
ToType (string) --
The data type that the data is to be modified to.
Dropped (boolean) --
If true, then the column is removed.
Children (list) --
Only applicable to nested data structures. If you want to change the parent structure, but also one of its children, you can fill out this data structure. It is also Mapping , but its FromPath will be the parent's FromPath plus the FromPath from this structure.
For the children part, suppose you have the structure:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Children": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false }] }
You can specify a Mapping that looks like:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Children": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false }] }
SelectFields (dict) --
Specifies a transform that chooses the data property keys that you want to keep.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Paths (list) --
A JSON path to a variable in the data structure.
(list) --
(string) --
DropFields (dict) --
Specifies a transform that chooses the data property keys that you want to drop.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Paths (list) --
A JSON path to a variable in the data structure.
(list) --
(string) --
RenameField (dict) --
Specifies a transform that renames a single data property key.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
SourcePath (list) --
A JSON path to a variable in the data structure for the source data.
(string) --
TargetPath (list) --
A JSON path to a variable in the data structure for the target data.
(string) --
Spigot (dict) --
Specifies a transform that writes samples of the data to an Amazon S3 bucket.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Path (string) --
A path in Amazon S3 where the transform will write a subset of records from the dataset to a JSON file in an Amazon S3 bucket.
Topk (integer) --
Specifies a number of records to write starting from the beginning of the dataset.
Prob (float) --
The probability (a decimal value with a maximum value of 1) of picking any given record. A value of 1 indicates that each row read from the dataset should be included in the sample output.
Join (dict) --
Specifies a transform that joins two datasets into one dataset using a comparison phrase on the specified data property keys. You can use inner, outer, left, right, left semi, and left anti joins.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
JoinType (string) --
Specifies the type of join to be performed on the datasets.
Columns (list) --
A list of the two columns to be joined.
(dict) --
Specifies a column to be joined.
From (string) --
The column to be joined.
Keys (list) --
The key of the column to be joined.
(list) --
(string) --
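A hedged sketch of a Join node as it might appear among CodeGenConfigurationNodes; the node names, key columns, and join type are illustrative:

join_node = {
    'Join': {
        'Name': 'JoinOrdersToCustomers',              # illustrative
        'Inputs': ['orders_node', 'customers_node'],  # the two datasets to join
        'JoinType': 'left',   # one of: equijoin, left, right, outer, leftsemi, leftanti
        'Columns': [
            {'From': 'orders_node', 'Keys': [['customer_id']]},
            {'From': 'customers_node', 'Keys': [['id']]},
        ],
    },
}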
SplitFields (dict) --
Specifies a transform that splits data property keys into two DynamicFrames . The output is a collection of DynamicFrames : one with selected data property keys, and one with the remaining data property keys.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Paths (list) --
A JSON path to a variable in the data structure.
(list) --
(string) --
SelectFromCollection (dict) --
Specifies a transform that chooses one DynamicFrame from a collection of DynamicFrames . The output is the selected DynamicFrame .
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Index (integer) --
The index for the DynamicFrame to be selected.
FillMissingValues (dict) --
Specifies a transform that locates records in the dataset that have missing values and adds a new field with a value determined by imputation. The input data set is used to train the machine learning model that determines what the missing value should be.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
ImputedPath (string) --
A JSON path to a variable in the data structure for the dataset that is imputed.
FilledPath (string) --
A JSON path to a variable in the data structure for the dataset that is filled.
Filter (dict) --
Specifies a transform that splits a dataset into two, based on a filter condition.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
LogicalOperator (string) --
The operator used to filter rows by comparing the key value to a specified value.
Filters (list) --
Specifies a filter expression.
(dict) --
Specifies a filter expression.
Operation (string) --
The type of operation to perform in the expression.
Negated (boolean) --
Whether the expression is to be negated.
Values (list) --
A list of filter values.
(dict) --
Represents a single entry in the list of values for a FilterExpression .
Type (string) --
The type of filter value.
Value (list) --
The value to be associated.
(string) --
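A small sketch, assuming hypothetical node and column names, of a Filter node that keeps rows where a column equals a constant; each expression compares a COLUMNEXTRACTED value against a CONSTANT value:

filter_node = {
    'Filter': {
        'Name': 'KeepUsRows',          # illustrative
        'Inputs': ['upstream_node'],   # illustrative input node
        'LogicalOperator': 'AND',
        'Filters': [
            {
                'Operation': 'EQ',
                'Negated': False,
                'Values': [
                    {'Type': 'COLUMNEXTRACTED', 'Value': ['country']},  # column to compare
                    {'Type': 'CONSTANT', 'Value': ['US']},              # constant to compare against
                ],
            },
        ],
    },
}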
CustomCode (dict) --
Specifies a transform that uses custom code you provide to perform the data transformation. The output is a collection of DynamicFrames.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Code (string) --
The custom code that is used to perform the data transformation.
ClassName (string) --
The name defined for the custom code node class.
OutputSchemas (list) --
Specifies the data schema for the custom code transform.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
SparkSQL (dict) --
Specifies a transform where you enter a SQL query using Spark SQL syntax to transform the data. The output is a single DynamicFrame .
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
(string) --
SqlQuery (string) --
A SQL query that must use Spark SQL syntax and return a single data set.
SqlAliases (list) --
A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, if you have a data source named "MyDataSource" and you specify From as MyDataSource and Alias as SqlName, then in your SQL you can write:
select * from SqlName
and that query reads its data from MyDataSource. A sketch of a complete SparkSQL node follows the field descriptions for this transform.
(dict) --
Represents a single entry in the list of values for SqlAliases .
From (string) --
A table, or a column in a table.
Alias (string) --
A temporary name given to a table, or a column in a table.
OutputSchemas (list) --
Specifies the data schema for the SparkSQL transform.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
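The sketch referenced in the SqlAliases description above; the node name, input node ID, and query are illustrative:

sparksql_node = {
    'SparkSQL': {
        'Name': 'SelectViaSql',                # illustrative
        'Inputs': ['my_data_source_node'],     # node that produces "MyDataSource"
        'SqlAliases': [
            # Lets the query refer to the input as SqlName.
            {'From': 'MyDataSource', 'Alias': 'SqlName'},
        ],
        'SqlQuery': 'select * from SqlName',
    },
}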
DirectKinesisSource (dict) --
Specifies a direct Amazon Kinesis data source.
Name (string) --
The name of the data source.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
StreamingOptions (dict) --
Additional options for the Kinesis streaming data source.
EndpointUrl (string) --
The URL of the Kinesis endpoint.
StreamName (string) --
The name of the Kinesis data stream.
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingPosition (string) --
The starting position in the Kinesis data stream to read data from. The possible values are "latest" , "trim_horizon" , or "earliest" . The default value is "latest" .
MaxFetchTimeInMs (integer) --
The maximum time spent in the job executor to fetch a record from the Kinesis data stream per shard, specified in milliseconds (ms). The default value is 1000 .
MaxFetchRecordsPerShard (integer) --
The maximum number of records to fetch per shard in the Kinesis data stream. The default value is 100000 .
MaxRecordPerRead (integer) --
The maximum number of records to fetch from the Kinesis data stream in each getRecords operation. The default value is 10000 .
AddIdleTimeBetweenReads (boolean) --
Adds a time delay between two consecutive getRecords operations. The default value is "False" . This option is only configurable for Glue version 2.0 and above.
IdleTimeBetweenReadsInMs (integer) --
The minimum time delay between two consecutive getRecords operations, specified in ms. The default value is 1000 . This option is only configurable for Glue version 2.0 and above.
DescribeShardInterval (integer) --
The minimum time interval between two ListShards API calls for your script to consider resharding. The default value is 1s .
NumRetries (integer) --
The maximum number of retries for Kinesis Data Streams API requests. The default value is 3 .
RetryIntervalMs (integer) --
The cool-off time period (specified in ms) before retrying the Kinesis Data Streams API call. The default value is 1000 .
MaxRetryIntervalMs (integer) --
The maximum cool-off time period (specified in ms) between two retries of a Kinesis Data Streams API call. The default value is 10000 .
AvoidEmptyBatches (boolean) --
Avoids creating an empty microbatch job by checking for unread data in the Kinesis data stream before the batch is started. The default value is "False" .
StreamArn (string) --
The Amazon Resource Name (ARN) of the Kinesis data stream.
RoleArn (string) --
The Amazon Resource Name (ARN) of the role to assume using AWS Security Token Service (AWS STS). This role must have permissions for describe or read record operations for the Kinesis data stream. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSSessionName" .
RoleSessionName (string) --
An identifier for the session assuming the role using AWS STS. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSRoleARN" .
DataPreviewOptions (dict) --
Additional options for data preview.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
DirectKafkaSource (dict) --
Specifies an Apache Kafka data store.
Name (string) --
The name of the data store.
StreamingOptions (dict) --
Specifies the streaming options.
BootstrapServers (string) --
A list of bootstrap server URLs, for example b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094 . This option must be specified in the API call or defined in the table metadata in the Data Catalog.
SecurityProtocol (string) --
The protocol used to communicate with brokers. The possible values are "SSL" or "PLAINTEXT" .
ConnectionName (string) --
The name of the connection.
TopicName (string) --
The topic name as specified in Apache Kafka. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Assign (string) --
The specific TopicPartitions to consume. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
SubscribePattern (string) --
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingOffsets (string) --
The starting position in the Kafka topic to read data from. The possible values are "earliest" or "latest" . The default value is "latest" .
EndingOffsets (string) --
The end point when a batch query is ended. Possible values are either "latest" or a JSON string that specifies an ending offset for each TopicPartition .
PollTimeoutMs (integer) --
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is 512 .
NumRetries (integer) --
The number of times to retry before failing to fetch Kafka offsets. The default value is 3 .
RetryIntervalMs (integer) --
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is 10 .
MaxOffsetsPerTrigger (integer) --
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across topicPartitions of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.
MinPartitions (integer) --
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of spark partitions is equal to the number of Kafka partitions.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
DataPreviewOptions (dict) --
Specifies options related to data preview for viewing a sample of your data.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
CatalogKinesisSource (dict) --
Specifies a Kinesis data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
Table (string) --
The name of the table in the database to read from.
Database (string) --
The name of the database to read from.
StreamingOptions (dict) --
Additional options for the Kinesis streaming data source.
EndpointUrl (string) --
The URL of the Kinesis endpoint.
StreamName (string) --
The name of the Kinesis data stream.
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingPosition (string) --
The starting position in the Kinesis data stream to read data from. The possible values are "latest" , "trim_horizon" , or "earliest" . The default value is "latest" .
MaxFetchTimeInMs (integer) --
The maximum time spent in the job executor to fetch a record from the Kinesis data stream per shard, specified in milliseconds (ms). The default value is 1000 .
MaxFetchRecordsPerShard (integer) --
The maximum number of records to fetch per shard in the Kinesis data stream. The default value is 100000 .
MaxRecordPerRead (integer) --
The maximum number of records to fetch from the Kinesis data stream in each getRecords operation. The default value is 10000 .
AddIdleTimeBetweenReads (boolean) --
Adds a time delay between two consecutive getRecords operations. The default value is "False" . This option is only configurable for Glue version 2.0 and above.
IdleTimeBetweenReadsInMs (integer) --
The minimum time delay between two consecutive getRecords operations, specified in ms. The default value is 1000 . This option is only configurable for Glue version 2.0 and above.
DescribeShardInterval (integer) --
The minimum time interval between two ListShards API calls for your script to consider resharding. The default value is 1s .
NumRetries (integer) --
The maximum number of retries for Kinesis Data Streams API requests. The default value is 3 .
RetryIntervalMs (integer) --
The cool-off time period (specified in ms) before retrying the Kinesis Data Streams API call. The default value is 1000 .
MaxRetryIntervalMs (integer) --
The maximum cool-off time period (specified in ms) between two retries of a Kinesis Data Streams API call. The default value is 10000 .
AvoidEmptyBatches (boolean) --
Avoids creating an empty microbatch job by checking for unread data in the Kinesis data stream before the batch is started. The default value is "False" .
StreamArn (string) --
The Amazon Resource Name (ARN) of the Kinesis data stream.
RoleArn (string) --
The Amazon Resource Name (ARN) of the role to assume using AWS Security Token Service (AWS STS). This role must have permissions for describe or read record operations for the Kinesis data stream. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSSessionName" .
RoleSessionName (string) --
An identifier for the session assuming the role using AWS STS. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSRoleARN" .
DataPreviewOptions (dict) --
Additional options for data preview.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
CatalogKafkaSource (dict) --
Specifies an Apache Kafka data store in the Data Catalog.
Name (string) --
The name of the data store.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
Table (string) --
The name of the table in the database to read from.
Database (string) --
The name of the database to read from.
StreamingOptions (dict) --
Specifies the streaming options.
BootstrapServers (string) --
A list of bootstrap server URLs, for example b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094 . This option must be specified in the API call or defined in the table metadata in the Data Catalog.
SecurityProtocol (string) --
The protocol used to communicate with brokers. The possible values are "SSL" or "PLAINTEXT" .
ConnectionName (string) --
The name of the connection.
TopicName (string) --
The topic name as specified in Apache Kafka. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Assign (string) --
The specific TopicPartitions to consume. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
SubscribePattern (string) --
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingOffsets (string) --
The starting position in the Kafka topic to read data from. The possible values are "earliest" or "latest" . The default value is "latest" .
EndingOffsets (string) --
The end point when a batch query is ended. Possible values are either "latest" or a JSON string that specifies an ending offset for each TopicPartition .
PollTimeoutMs (integer) --
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is 512 .
NumRetries (integer) --
The number of times to retry before failing to fetch Kafka offsets. The default value is 3 .
RetryIntervalMs (integer) --
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is 10 .
MaxOffsetsPerTrigger (integer) --
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across topicPartitions of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.
MinPartitions (integer) --
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of spark partitions is equal to the number of Kafka partitions.
DataPreviewOptions (dict) --
Specifies options related to data preview for viewing a sample of your data.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
DropNullFields (dict) --
Specifies a transform that removes columns from the dataset if all values in the column are 'null'. By default, Glue Studio recognizes null objects, but some values, such as empty strings, strings that are "null", -1 integers, or other placeholders such as zeros, are not automatically recognized as nulls.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
NullCheckBoxList (dict) --
A structure that represents whether certain values are recognized as null values for removal.
IsEmpty (boolean) --
Specifies that an empty string is considered as a null value.
IsNullString (boolean) --
Specifies that a value spelling out the word 'null' is considered as a null value.
IsNegOne (boolean) --
Specifies that an integer value of -1 is considered as a null value.
NullTextList (list) --
A structure that specifies a list of NullValueField structures that represent a custom null value such as zero or other value being used as a null placeholder unique to the dataset.
The DropNullFields transform removes custom null values only if both the value of the null placeholder and the datatype match the data.
(dict) --
Represents a custom null value, such as a zero or other value, that is used as a null placeholder unique to the dataset.
Value (string) --
The value of the null placeholder.
Datatype (dict) --
The datatype of the value.
Id (string) --
The datatype of the value.
Label (string) --
A label assigned to the datatype.
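A sketch of a DropNullFields node under stated assumptions (the node names and the Datatype Id/Label values are illustrative): empty strings and the literal string "null" are treated as nulls, and a custom zero placeholder is removed only where both the value and the datatype match.

drop_null_fields_node = {
    'DropNullFields': {
        'Name': 'DropPlaceholderNulls',   # illustrative
        'Inputs': ['upstream_node'],      # illustrative input node
        'NullCheckBoxList': {
            'IsEmpty': True,        # treat "" as null
            'IsNullString': True,   # treat the string 'null' as null
            'IsNegOne': False,
        },
        'NullTextList': [
            # Custom placeholder: a zero stored as an integer counts as null.
            {'Value': '0', 'Datatype': {'Id': 'int', 'Label': 'integer'}},
        ],
    },
}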
Merge (dict) --
Specifies a transform that merges a DynamicFrame with a staging DynamicFrame based on the specified primary keys to identify records. Duplicate records (records with the same primary keys) are not de-duplicated.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Source (string) --
The source DynamicFrame that will be merged with a staging DynamicFrame .
PrimaryKeys (list) --
The list of primary key fields to match records from the source and staging dynamic frames.
(list) --
(string) --
Union (dict) --
Specifies a transform that combines the rows from two or more datasets into a single result.
Name (string) --
The name of the transform node.
Inputs (list) --
The node ID inputs to the transform.
(string) --
UnionType (string) --
Indicates the type of Union transform.
Specify ALL to include all rows from the data sources in the resulting DynamicFrame. The resulting union does not remove duplicate rows.
Specify DISTINCT to remove duplicate rows in the resulting DynamicFrame.
PIIDetection (dict) --
Specifies a transform that identifies, removes or masks PII data.
Name (string) --
The name of the transform node.
Inputs (list) --
The node ID inputs to the transform.
(string) --
PiiType (string) --
Indicates the type of PIIDetection transform.
EntityTypesToDetect (list) --
Indicates the types of entities the PIIDetection transform will identify as PII data.
PII type entities include: PERSON_NAME, DATE, USA_SSN, EMAIL, USA_ITIN, USA_PASSPORT_NUMBER, PHONE_NUMBER, BANK_ACCOUNT, IP_ADDRESS, MAC_ADDRESS, USA_CPT_CODE, USA_HCPCS_CODE, USA_NATIONAL_DRUG_CODE, USA_MEDICARE_BENEFICIARY_IDENTIFIER, USA_HEALTH_INSURANCE_CLAIM_NUMBER, CREDIT_CARD, USA_NATIONAL_PROVIDER_IDENTIFIER, USA_DEA_NUMBER, USA_DRIVING_LICENSE
(string) --
OutputColumnName (string) --
Indicates the output column name that will contain any entity type detected in that row.
SampleFraction (float) --
Indicates the fraction of the data to sample when scanning for PII entities.
ThresholdFraction (float) --
Indicates the fraction of the data that must be met in order for a column to be identified as PII data.
MaskValue (string) --
Indicates the value that will replace the detected entity.
Aggregate (dict) --
Specifies a transform that groups rows by chosen fields and computes the aggregated value by specified function.
Name (string) --
The name of the transform node.
Inputs (list) --
Specifies the fields and rows to use as inputs for the aggregate transform.
(string) --
Groups (list) --
Specifies the fields to group by.
(list) --
(string) --
Aggs (list) --
Specifies the aggregate functions to be performed on specified fields.
(dict) --
Specifies the set of parameters needed to perform aggregation in the aggregate transform.
Column (list) --
Specifies the column on the data set on which the aggregation function will be applied.
(string) --
AggFunc (string) --
Specifies the aggregation function to apply.
Possible aggregation functions include: avg, countDistinct, count, first, last, kurtosis, max, min, skewness, stddev_samp, stddev_pop, sum, sumDistinct, var_samp, var_pop
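A short sketch of an Aggregate node (group-by columns, aggregated columns, and node names are illustrative) that sums one field and counts distinct values of another per group:

aggregate_node = {
    'Aggregate': {
        'Name': 'SalesByRegionAndYear',   # illustrative
        'Inputs': ['upstream_node'],      # illustrative input node
        'Groups': [['region'], ['year']],
        'Aggs': [
            {'Column': ['amount'], 'AggFunc': 'sum'},
            {'Column': ['order_id'], 'AggFunc': 'countDistinct'},
        ],
    },
}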
DropDuplicates (dict) --
Specifies a transform that removes rows of repeating data from a data set.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Columns (list) --
The name of the columns to be merged or removed if repeating.
(list) --
(string) --
GovernedCatalogTarget (dict) --
Specifies a data target that writes to a governed catalog.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Table (string) --
The name of the table in the database to write to.
Database (string) --
The name of the database to write to.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the governed catalog.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
GovernedCatalogSource (dict) --
Specifies a data source in a governed Data Catalog.
Name (string) --
The name of the data store.
Database (string) --
The database to read from.
Table (string) --
The database table to read from.
PartitionPredicate (string) --
Partitions satisfying this predicate are read from the data source. Set to "" – empty by default.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
MicrosoftSQLServerCatalogSource (dict) --
Specifies a Microsoft SQL Server data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
MySQLCatalogSource (dict) --
Specifies a MySQL data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
OracleSQLCatalogSource (dict) --
Specifies an Oracle data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
PostgreSQLCatalogSource (dict) --
Specifies a PostgreSQL data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
MicrosoftSQLServerCatalogTarget (dict) --
Specifies a target that uses Microsoft SQL Server.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
MySQLCatalogTarget (dict) --
Specifies a target that uses MySQL.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
OracleSQLCatalogTarget (dict) --
Specifies a target that uses Oracle SQL.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
PostgreSQLCatalogTarget (dict) --
Specifies a target that uses PostgreSQL.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
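A minimal boto3 sketch, assuming a hypothetical job name, that reads ExecutionClass back from the batch_get_jobs response described above:

import boto3

glue = boto3.client('glue')

# 'nightly-report' is a placeholder job name.
response = glue.batch_get_jobs(JobNames=['nightly-report'])
for job in response['Jobs']:
    # Jobs created before this release may omit ExecutionClass entirely.
    print(job['Name'], job.get('ExecutionClass', 'STANDARD'))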
{'JobRun': {'ExecutionClass': 'FLEX | STANDARD', 'JobRunState': {'WAITING', 'ERROR'}}}
Retrieves the metadata for a given job run.
See also: AWS API Documentation
Request Syntax
client.get_job_run( JobName='string', RunId='string', PredecessorsIncluded=True|False )
string
[REQUIRED]
Name of the job definition being run.
string
[REQUIRED]
The ID of the job run.
boolean
True if a list of predecessor runs should be returned.
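A hedged usage sketch of the request above (the job name and run ID are placeholders), reading the new states and ExecutionClass from the response:

import boto3

glue = boto3.client('glue')

response = glue.get_job_run(
    JobName='my-flex-job',          # placeholder job name
    RunId='jr_0123456789abcdef',    # placeholder run ID
    PredecessorsIncluded=False,
)

run = response['JobRun']
print(run['JobRunState'])                      # new states in this release include 'WAITING' and 'ERROR'
print(run.get('ExecutionClass', 'STANDARD'))   # 'FLEX' or 'STANDARD'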
dict
Response Syntax
{ 'JobRun': { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' } }
Response Structure
(dict) --
JobRun (dict) --
The requested job-run metadata.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
{'JobRuns': {'ExecutionClass': 'FLEX | STANDARD', 'JobRunState': {'WAITING', 'ERROR'}}}
Retrieves metadata for all runs of a given job definition.
See also: AWS API Documentation
Request Syntax
client.get_job_runs( JobName='string', NextToken='string', MaxResults=123 )
string
[REQUIRED]
The name of the job definition for which to retrieve all job runs.
string
A continuation token, if this is a continuation call.
integer
The maximum size of the response.
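A sketch of paging through get_job_runs with NextToken (the job name is a placeholder); the loop collects the IDs of runs that are in the new WAITING state:

import boto3

glue = boto3.client('glue')

waiting_runs = []
token = None
while True:
    kwargs = {'JobName': 'my-flex-job', 'MaxResults': 100}   # placeholder job name
    if token:
        kwargs['NextToken'] = token
    page = glue.get_job_runs(**kwargs)
    waiting_runs += [r['Id'] for r in page['JobRuns']
                     if r['JobRunState'] == 'WAITING']
    token = page.get('NextToken')
    if not token:
        break

print(waiting_runs)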
dict
Response Syntax
{ 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ], 'NextToken': 'string' }
Response Structure
(dict) --
JobRuns (list) --
A list of job-run metadata objects.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
NextToken (string) --
A continuation token, if not all requested job runs have been returned.
{'Jobs': {'ExecutionClass': 'FLEX | STANDARD'}}
Retrieves all current job definitions.
See also: AWS API Documentation
Request Syntax
client.get_jobs( NextToken='string', MaxResults=123 )
string
A continuation token, if this is a continuation call.
integer
The maximum size of the response.
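A sketch that pages through get_jobs and lists the job definitions currently configured with the flexible execution class; it assumes nothing beyond the response shape shown below:

import boto3

glue = boto3.client('glue')

flex_jobs = []
token = None
while True:
    kwargs = {'NextToken': token} if token else {}
    page = glue.get_jobs(**kwargs)
    for job in page['Jobs']:
        if job.get('ExecutionClass') == 'FLEX':
            flex_jobs.append(job['Name'])
    token = page.get('NextToken')
    if not token:
        break

print(flex_jobs)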
dict
Response Syntax
{ 'Jobs': [ { 'Name': 'string', 'Description': 'string', 'LogUri': 'string', 'Role': 'string', 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'ExecutionProperty': { 'MaxConcurrentRuns': 123 }, 'Command': { 'Name': 'string', 'ScriptLocation': 'string', 'PythonVersion': 'string' }, 'DefaultArguments': { 'string': 'string' }, 'NonOverridableArguments': { 'string': 'string' }, 'Connections': { 'Connections': [ 'string', ] }, 'MaxRetries': 123, 'AllocatedCapacity': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'CodeGenConfigurationNodes': { 'string': { 'AthenaConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'ConnectionTable': 'string', 'SchemaName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'JDBCConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'FilterPredicate': 'string', 'PartitionColumn': 'string', 'LowerBound': 123, 'UpperBound': 123, 'NumPartitions': 123, 'JobBookmarkKeys': [ 'string', ], 'JobBookmarkKeysSortOrder': 'string', 'DataTypeMapping': { 'string': 'DATE'|'STRING'|'TIMESTAMP'|'INT'|'FLOAT'|'LONG'|'BIGDECIMAL'|'BYTE'|'SHORT'|'DOUBLE' } }, 'ConnectionTable': 'string', 'Query': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'RedshiftSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string' }, 'S3CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'S3CsvSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'Separator': 'comma'|'ctrla'|'pipe'|'semicolon'|'tab', 'Escaper': 'string', 'QuoteChar': 'quote'|'quillemet'|'single_quote'|'disabled', 'Multiline': True|False, 'WithHeader': True|False, 'WriteHeader': True|False, 'SkipFirst': True|False, 'OptimizePerformance': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3JsonSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'JsonPath': 'string', 'Multiline': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3ParquetSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 
'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'RelationalCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'DynamoDBCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'JDBCConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectionTable': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'RedshiftTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string', 'UpsertRedshiftOptions': { 'TableLocation': 'string', 'ConnectionName': 'string', 'UpsertKeys': [ 'string', ] } }, 'S3CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3GlueParquetTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'S3DirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'string', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'ApplyMapping': { 'Name': 'string', 'Inputs': [ 'string', ], 'Mapping': [ { 'ToKey': 'string', 'FromPath': [ 'string', ], 'FromType': 'string', 'ToType': 'string', 'Dropped': True|False, 'Children': {'... 
recursive ...'} }, ] }, 'SelectFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'DropFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'RenameField': { 'Name': 'string', 'Inputs': [ 'string', ], 'SourcePath': [ 'string', ], 'TargetPath': [ 'string', ] }, 'Spigot': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Topk': 123, 'Prob': 123.0 }, 'Join': { 'Name': 'string', 'Inputs': [ 'string', ], 'JoinType': 'equijoin'|'left'|'right'|'outer'|'leftsemi'|'leftanti', 'Columns': [ { 'From': 'string', 'Keys': [ [ 'string', ], ] }, ] }, 'SplitFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'SelectFromCollection': { 'Name': 'string', 'Inputs': [ 'string', ], 'Index': 123 }, 'FillMissingValues': { 'Name': 'string', 'Inputs': [ 'string', ], 'ImputedPath': 'string', 'FilledPath': 'string' }, 'Filter': { 'Name': 'string', 'Inputs': [ 'string', ], 'LogicalOperator': 'AND'|'OR', 'Filters': [ { 'Operation': 'EQ'|'LT'|'GT'|'LTE'|'GTE'|'REGEX'|'ISNULL', 'Negated': True|False, 'Values': [ { 'Type': 'COLUMNEXTRACTED'|'CONSTANT', 'Value': [ 'string', ] }, ] }, ] }, 'CustomCode': { 'Name': 'string', 'Inputs': [ 'string', ], 'Code': 'string', 'ClassName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkSQL': { 'Name': 'string', 'Inputs': [ 'string', ], 'SqlQuery': 'string', 'SqlAliases': [ { 'From': 'string', 'Alias': 'string' }, ], 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'DirectKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DirectKafkaSource': { 'Name': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123 }, 'WindowSize': 123, 'DetectSchema': True|False, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string' }, 
'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKafkaSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123 }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DropNullFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'NullCheckBoxList': { 'IsEmpty': True|False, 'IsNullString': True|False, 'IsNegOne': True|False }, 'NullTextList': [ { 'Value': 'string', 'Datatype': { 'Id': 'string', 'Label': 'string' } }, ] }, 'Merge': { 'Name': 'string', 'Inputs': [ 'string', ], 'Source': 'string', 'PrimaryKeys': [ [ 'string', ], ] }, 'Union': { 'Name': 'string', 'Inputs': [ 'string', ], 'UnionType': 'ALL'|'DISTINCT' }, 'PIIDetection': { 'Name': 'string', 'Inputs': [ 'string', ], 'PiiType': 'RowAudit'|'RowMasking'|'ColumnAudit'|'ColumnMasking', 'EntityTypesToDetect': [ 'string', ], 'OutputColumnName': 'string', 'SampleFraction': 123.0, 'ThresholdFraction': 123.0, 'MaskValue': 'string' }, 'Aggregate': { 'Name': 'string', 'Inputs': [ 'string', ], 'Groups': [ [ 'string', ], ], 'Aggs': [ { 'Column': [ 'string', ], 'AggFunc': 'avg'|'countDistinct'|'count'|'first'|'last'|'kurtosis'|'max'|'min'|'skewness'|'stddev_samp'|'stddev_pop'|'sum'|'sumDistinct'|'var_samp'|'var_pop' }, ] }, 'DropDuplicates': { 'Name': 'string', 'Inputs': [ 'string', ], 'Columns': [ [ 'string', ], ] }, 'GovernedCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'GovernedCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'MicrosoftSQLServerCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MicrosoftSQLServerCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' } } }, 'ExecutionClass': 'FLEX'|'STANDARD' }, ], 'NextToken': 'string' }
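This response is paginated: when NextToken is present, pass it back to retrieve the next page of job definitions. As a minimal, non-authoritative sketch (shown here with the Glue get_jobs call, which returns this Jobs/NextToken shape; the jobs printed are whatever exists in your account):

import boto3

glue = boto3.client('glue')

# Page through the job list and report each job's name and execution class.
response = glue.get_jobs(MaxResults=100)
while True:
    for job in response['Jobs']:
        print(job['Name'], job.get('ExecutionClass', 'STANDARD'))
    if 'NextToken' not in response:
        break
    response = glue.get_jobs(MaxResults=100, NextToken=response['NextToken'])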
Response Structure
(dict) --
Jobs (list) --
A list of job definitions.
(dict) --
Specifies a job definition.
Name (string) --
The name you assign to this job definition.
Description (string) --
A description of the job.
LogUri (string) --
This field is reserved for future use.
Role (string) --
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
CreatedOn (datetime) --
The time and date that this job definition was created.
LastModifiedOn (datetime) --
The last point in time when this job definition was modified.
ExecutionProperty (dict) --
An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.
MaxConcurrentRuns (integer) --
The maximum number of concurrent runs allowed for the job. The default is 1. An error is returned when this threshold is reached. The maximum value you can specify is controlled by a service limit.
Command (dict) --
The JobCommand that runs this job.
Name (string) --
The name of the job command. For an Apache Spark ETL job, this must be glueetl . For a Python shell job, it must be pythonshell . For an Apache Spark streaming ETL job, this must be gluestreaming .
ScriptLocation (string) --
Specifies the Amazon Simple Storage Service (Amazon S3) path to a script that runs a job.
PythonVersion (string) --
The Python version being used to run a Python shell job. Allowed values are 2 or 3.
DefaultArguments (dict) --
The default arguments for this job, specified as name-value pairs.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
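For example, a hedged sketch of such a map (the bucket and the '--my_input_path' key are hypothetical; '--TempDir' and '--job-bookmark-option' are documented Glue special parameters):

# DefaultArguments: arguments your script consumes plus Glue special parameters.
default_arguments = {
    '--my_input_path': 's3://example-bucket/input/',   # hypothetical script argument
    '--TempDir': 's3://example-bucket/temp/',          # Glue special parameter
    '--job-bookmark-option': 'job-bookmark-enable',    # Glue special parameter
}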
NonOverridableArguments (dict) --
Non-overridable arguments for this job, specified as name-value pairs.
(string) --
(string) --
Connections (dict) --
The connections used for this job.
Connections (list) --
A list of connections used by the job.
(string) --
MaxRetries (integer) --
The maximum number of times to retry this job after a JobRun fails.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to runs of this job. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Timeout (integer) --
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
For Glue version 2.0 jobs, you cannot specify a Maximum capacity . Instead, you should specify a Worker type and the Number of workers ; a hedged example follows the NumberOfWorkers field below.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
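To tie WorkerType, NumberOfWorkers, and the new ExecutionClass field together, a hedged create_job sketch that specifies workers instead of MaxCapacity (job name, role, and script path are hypothetical; FLEX is the spare-capacity class introduced in this release, and STANDARD remains the default):

import boto3

glue = boto3.client('glue')

# Hypothetical Spark job opted into spare capacity with ExecutionClass='FLEX'.
glue.create_job(
    Name='example-flex-job',                                  # hypothetical
    Role='arn:aws:iam::123456789012:role/GlueJobRole',        # hypothetical
    Command={
        'Name': 'glueetl',
        'ScriptLocation': 's3://example-bucket/scripts/job.py',  # hypothetical
        'PythonVersion': '3',
    },
    GlueVersion='3.0',
    WorkerType='G.1X',
    NumberOfWorkers=10,
    ExecutionClass='FLEX',   # use 'STANDARD' for time-sensitive workloads
)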
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job.
NotificationProperty (dict) --
Specifies configuration properties of a job notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
CodeGenConfigurationNodes (dict) --
The representation of a directed acyclic graph on which both the Glue Studio visual component and Glue Studio code generation is based.
(string) --
(dict) --
CodeGenConfigurationNode enumerates all valid Node types. One and only one of its member variables can be populated.
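As a rough, non-authoritative illustration of that rule, a hypothetical two-node graph that could be passed as CodeGenConfigurationNodes when creating a job (node keys, names, paths, and the exact set of fields Glue Studio requires are illustrative only):

# Hypothetical two-node graph: each node dict populates exactly one member.
code_gen_nodes = {
    'node-1': {
        'S3CsvSource': {
            'Name': 'read_csv',
            'Paths': ['s3://example-bucket/raw/'],
            'Separator': 'comma',
            'QuoteChar': 'quote',
            'WithHeader': True,
        },
    },
    'node-2': {
        'S3DirectTarget': {
            'Name': 'write_parquet',
            'Inputs': ['node-1'],
            'Path': 's3://example-bucket/curated/',
            'Format': 'parquet',
            'Compression': 'snappy',
        },
    },
}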
AthenaConnectorSource (dict) --
Specifies a connector to an Amazon Athena data source.
Name (string) --
The name of the data source.
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectorName (string) --
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) --
The type of connection, such as marketplace.athena or custom.athena, designating a connection to an Amazon Athena data store.
ConnectionTable (string) --
The name of the table in the data source.
SchemaName (string) --
The name of the CloudWatch log group to read from. For example, /aws-glue/jobs/output .
OutputSchemas (list) --
Specifies the data schema for the custom Athena source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
JDBCConnectorSource (dict) --
Specifies a connector to a JDBC data source.
Name (string) --
The name of the data source.
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectorName (string) --
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) --
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
FilterPredicate (string) --
Extra condition clause to filter data from source. For example:
BillingCity='Mountain View'
When using a query instead of a table name, you should validate that the query works with the specified filterPredicate .
PartitionColumn (string) --
The name of an integer column that is used for partitioning. This option works only when it's included with lowerBound , upperBound , and numPartitions . This option works the same way as in the Spark SQL JDBC reader.
LowerBound (integer) --
The minimum value of partitionColumn that is used to decide partition stride.
UpperBound (integer) --
The maximum value of partitionColumn that is used to decide partition stride.
NumPartitions (integer) --
The number of partitions. This value, along with lowerBound (inclusive) and upperBound (exclusive), form partition strides for generated WHERE clause expressions that are used to split the partitionColumn .
JobBookmarkKeys (list) --
The name of the job bookmark keys on which to sort.
(string) --
JobBookmarkKeysSortOrder (string) --
Specifies an ascending or descending sort order.
DataTypeMapping (dict) --
Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option "dataTypeMapping":{"FLOAT":"STRING"} maps data fields of JDBC type FLOAT into the Java String type by calling the ResultSet.getString() method of the driver, and uses it to build the Glue record. The ResultSet object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions. A hedged example sketch follows the JDBCConnectorSource field descriptions below.
(string) --
(string) --
ConnectionTable (string) --
The name of the table in the data source.
Query (string) --
The table or SQL query to get the data from. You can specify either ConnectionTable or query , but not both.
OutputSchemas (list) --
Specifies the data schema for the custom JDBC source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
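Pulling the JDBCConnectorSource fields above together, a hedged sketch of one such node, including the DataTypeMapping example that reads JDBC FLOAT columns as strings (connection, connector, table, and column names are hypothetical):

# Hypothetical JDBCConnectorSource node with partitioned reads and a type mapping.
jdbc_source_node = {
    'JDBCConnectorSource': {
        'Name': 'read_orders',
        'ConnectionName': 'my-jdbc-connection',
        'ConnectorName': 'my-jdbc-connector',
        'ConnectionType': 'custom.jdbc',
        'ConnectionTable': 'orders',
        'AdditionalOptions': {
            'DataTypeMapping': {'FLOAT': 'STRING'},
            'PartitionColumn': 'order_id',
            'LowerBound': 0,
            'UpperBound': 1000000,
            'NumPartitions': 10,
        },
    },
}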
SparkConnectorSource (dict) --
Specifies a connector to an Apache Spark data source.
Name (string) --
The name of the data source.
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectorName (string) --
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) --
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies data schema for the custom spark source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
CatalogSource (dict) --
Specifies a data store in the Glue Data Catalog.
Name (string) --
The name of the data store.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
RedshiftSource (dict) --
Specifies an Amazon Redshift data store.
Name (string) --
The name of the Amazon Redshift data store.
Database (string) --
The database to read from.
Table (string) --
The database table to read from.
RedshiftTmpDir (string) --
The Amazon S3 path where temporary data can be staged when copying out of the database.
TmpDirIAMRole (string) --
The IAM role with permissions.
S3CatalogSource (dict) --
Specifies an Amazon S3 data store in the Glue Data Catalog.
Name (string) --
The name of the data store.
Database (string) --
The database to read from.
Table (string) --
The database table to read from.
PartitionPredicate (string) --
Partitions satisfying this predicate are deleted. Files within the retention period in these partitions are not deleted. Set to "" – empty by default.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
S3CsvSource (dict) --
Specifies a comma-separated value (CSV) data store stored in Amazon S3.
Name (string) --
The name of the data store.
Paths (list) --
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
Separator (string) --
Specifies the delimiter character. The default is a comma: ",", but any other character can be specified.
Escaper (string) --
Specifies a character to use for escaping. This option is used only when reading CSV files. The default value is none . If enabled, the character which immediately follows is used as-is, except for a small set of well-known escapes ( \n , \r , \t , and \0 ).
QuoteChar (string) --
Specifies the character to use for quoting. The default is a double quote: '"' . Set this to -1 to turn off quoting entirely.
Multiline (boolean) --
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The default value is False , which allows for more aggressive file-splitting during parsing.
WithHeader (boolean) --
A Boolean value that specifies whether to treat the first line as a header. The default value is False .
WriteHeader (boolean) --
A Boolean value that specifies whether to write the header to output. The default value is True .
SkipFirst (boolean) --
A Boolean value that specifies whether to skip the first data line. The default value is False .
OptimizePerformance (boolean) --
A Boolean value that specifies whether to use the advanced SIMD CSV reader along with Apache Arrow based columnar memory formats. Only available in Glue version 3.0.
OutputSchemas (list) --
Specifies the data schema for the S3 CSV source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
S3JsonSource (dict) --
Specifies a JSON data store stored in Amazon S3.
Name (string) --
The name of the data store.
Paths (list) --
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
JsonPath (string) --
A JsonPath string defining the JSON data.
Multiline (boolean) --
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The default value is False , which allows for more aggressive file-splitting during parsing.
OutputSchemas (list) --
Specifies the data schema for the S3 JSON source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
S3ParquetSource (dict) --
Specifies an Apache Parquet data store stored in Amazon S3.
Name (string) --
The name of the data store.
Paths (list) --
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
OutputSchemas (list) --
Specifies the data schema for the S3 Parquet source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
RelationalCatalogSource (dict) --
Specifies a Relational database data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
DynamoDBCatalogSource (dict) --
Specifies a DynamoDB data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
JDBCConnectorTarget (dict) --
Specifies a data target that writes to a JDBC data store using a connector.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
ConnectionName (string) --
The name of the connection that is associated with the connector.
ConnectionTable (string) --
The name of the table in the data target.
ConnectorName (string) --
The name of a connector that will be used.
ConnectionType (string) --
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data target.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies the data schema for the JDBC target.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
SparkConnectorTarget (dict) --
Specifies a target that uses an Apache Spark connector.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
ConnectionName (string) --
The name of a connection for an Apache Spark connector.
ConnectorName (string) --
The name of an Apache Spark connector.
ConnectionType (string) --
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies the data schema for the custom spark target.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
CatalogTarget (dict) --
Specifies a target that uses a Glue Data Catalog table.
Name (string) --
The name of your data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The database that contains the table you want to use as the target. This database must already exist in the Data Catalog.
Table (string) --
The table that defines the schema of your output data. This table must already exist in the Data Catalog.
RedshiftTarget (dict) --
Specifies a target that uses Amazon Redshift.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
RedshiftTmpDir (string) --
The Amazon S3 path where temporary data can be staged when copying out of the database.
TmpDirIAMRole (string) --
The IAM role with permissions.
UpsertRedshiftOptions (dict) --
The set of options to configure an upsert operation when writing to a Redshift target.
TableLocation (string) --
The physical location of the Redshift table.
ConnectionName (string) --
The name of the connection to use to write to Redshift.
UpsertKeys (list) --
The keys used to determine whether to perform an update or insert.
(string) --
S3CatalogTarget (dict) --
Specifies a data target that writes to Amazon S3 using the Glue Data Catalog.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Table (string) --
The name of the table in the database to write to.
Database (string) --
The name of the database to write to.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
S3GlueParquetTarget (dict) --
Specifies a data target that writes to Amazon S3 in Apache Parquet columnar storage.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Path (string) --
A single Amazon S3 path to write to.
Compression (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "snappy", "lzo", "gzip", "uncompressed", and "none".
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
Table (string) --
Specifies the table in the database that the schema change policy applies to.
Database (string) --
Specifies the database that the schema change policy applies to.
S3DirectTarget (dict) --
Specifies a data target that writes to Amazon S3.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Path (string) --
A single Amazon S3 path to write to.
Compression (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Format (string) --
Specifies the data output format for the target.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
Table (string) --
Specifies the table in the database that the schema change policy applies to.
Database (string) --
Specifies the database that the schema change policy applies to.
ApplyMapping (dict) --
Specifies a transform that maps data property keys in the data source to data property keys in the data target. You can rename keys, modify the data types for keys, and choose which keys to drop from the dataset.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Mapping (list) --
Specifies the mapping of data property keys in the data source to data property keys in the data target.
(dict) --
Specifies the mapping of data property keys.
ToKey (string) --
The name of the column after the mapping is applied. It can be the same as FromPath .
FromPath (list) --
The table or column to be modified.
(string) --
FromType (string) --
The type of the data to be modified.
ToType (string) --
The data type that the data is to be modified to.
Dropped (boolean) --
If true, then the column is removed.
Children (list) --
Only applicable to nested data structures. If you want to change the parent structure, and also one of its children, you can fill out this data structure. It is also a Mapping , but its FromPath will be the parent's FromPath plus the FromPath from this structure.
For the children part, suppose you have the structure:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Chidlren": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false, }] }
You can specify a Mapping that looks like:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Chidlren": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false, }] }
SelectFields (dict) --
Specifies a transform that chooses the data property keys that you want to keep.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Paths (list) --
A JSON path to a variable in the data structure.
(list) --
(string) --
DropFields (dict) --
Specifies a transform that chooses the data property keys that you want to drop.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Paths (list) --
A JSON path to a variable in the data structure.
(list) --
(string) --
RenameField (dict) --
Specifies a transform that renames a single data property key.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
SourcePath (list) --
A JSON path to a variable in the data structure for the source data.
(string) --
TargetPath (list) --
A JSON path to a variable in the data structure for the target data.
(string) --
Spigot (dict) --
Specifies a transform that writes samples of the data to an Amazon S3 bucket.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Path (string) --
A path in Amazon S3 where the transform will write a subset of records from the dataset to a JSON file in an Amazon S3 bucket.
Topk (integer) --
Specifies a number of records to write starting from the beginning of the dataset.
Prob (float) --
The probability (a decimal value with a maximum value of 1) of picking any given record. A value of 1 indicates that each row read from the dataset should be included in the sample output.
Join (dict) --
Specifies a transform that joins two datasets into one dataset using a comparison phrase on the specified data property keys. You can use inner, outer, left, right, left semi, and left anti joins.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
JoinType (string) --
Specifies the type of join to be performed on the datasets.
Columns (list) --
A list of the two columns to be joined.
(dict) --
Specifies a column to be joined.
From (string) --
The column to be joined.
Keys (list) --
The key of the column to be joined.
(list) --
(string) --
SplitFields (dict) --
Specifies a transform that splits data property keys into two DynamicFrames . The output is a collection of DynamicFrames : one with selected data property keys, and one with the remaining data property keys.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Paths (list) --
A JSON path to a variable in the data structure.
(list) --
(string) --
SelectFromCollection (dict) --
Specifies a transform that chooses one DynamicFrame from a collection of DynamicFrames . The output is the selected DynamicFrame .
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Index (integer) --
The index for the DynamicFrame to be selected.
FillMissingValues (dict) --
Specifies a transform that locates records in the dataset that have missing values and adds a new field with a value determined by imputation. The input data set is used to train the machine learning model that determines what the missing value should be.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
ImputedPath (string) --
A JSON path to a variable in the data structure for the dataset that is imputed.
FilledPath (string) --
A JSON path to a variable in the data structure for the dataset that is filled.
Filter (dict) --
Specifies a transform that splits a dataset into two, based on a filter condition.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
LogicalOperator (string) --
The operator used to filter rows by comparing the key value to a specified value.
Filters (list) --
Specifies a filter expression.
(dict) --
Specifies a filter expression.
Operation (string) --
The type of operation to perform in the expression.
Negated (boolean) --
Whether the expression is to be negated.
Values (list) --
A list of filter values.
(dict) --
Represents a single entry in the list of values for a FilterExpression .
Type (string) --
The type of filter value.
Value (list) --
The value to be associated.
(string) --
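For example, a hedged sketch of a single-expression filter (the column and constant are hypothetical; pairing a COLUMNEXTRACTED value with a CONSTANT value is one plausible reading of the structure above):

# Hypothetical Filter node: keep rows where the 'country' column equals 'US'.
filter_node = {
    'Filter': {
        'Name': 'keep_us_rows',    # hypothetical
        'Inputs': ['node-1'],      # hypothetical upstream node key
        'LogicalOperator': 'AND',
        'Filters': [
            {
                'Operation': 'EQ',
                'Negated': False,
                'Values': [
                    {'Type': 'COLUMNEXTRACTED', 'Value': ['country']},
                    {'Type': 'CONSTANT', 'Value': ['US']},
                ],
            },
        ],
    },
}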
CustomCode (dict) --
Specifies a transform that uses custom code you provide to perform the data transformation. The output is a collection of DynamicFrames.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Code (string) --
The custom code that is used to perform the data transformation.
ClassName (string) --
The name defined for the custom code node class.
OutputSchemas (list) --
Specifies the data schema for the custom code transform.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
SparkSQL (dict) --
Specifies a transform where you enter a SQL query using Spark SQL syntax to transform the data. The output is a single DynamicFrame .
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
(string) --
SqlQuery (string) --
A SQL query that must use Spark SQL syntax and return a single data set.
SqlAliases (list) --
A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource, and Alias as SqlName, then in your SQL you can do:
select * from SqlName
and that gets data from MyDataSource.
(dict) --
Represents a single entry in the list of values for SqlAliases .
From (string) --
A table, or a column in a table.
Alias (string) --
A temporary name given to a table, or a column in a table.
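Restating the MyDataSource / SqlName example above as a boto3 structure, a hedged sketch (the node name is hypothetical):

# Hypothetical SparkSQL node: the input is aliased so the query can reference it.
sparksql_node = {
    'SparkSQL': {
        'Name': 'filter_with_sql',          # hypothetical
        'Inputs': ['MyDataSource'],         # upstream node, per the example above
        'SqlAliases': [
            {'From': 'MyDataSource', 'Alias': 'SqlName'},
        ],
        'SqlQuery': 'select * from SqlName',
    },
}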
OutputSchemas (list) --
Specifies the data schema for the SparkSQL transform.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) --
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
DirectKinesisSource (dict) --
Specifies a direct Amazon Kinesis data source.
Name (string) --
The name of the data source.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
StreamingOptions (dict) --
Additional options for the Kinesis streaming data source.
EndpointUrl (string) --
The URL of the Kinesis endpoint.
StreamName (string) --
The name of the Kinesis data stream.
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingPosition (string) --
The starting position in the Kinesis data stream to read data from. The possible values are "latest" , "trim_horizon" , or "earliest" . The default value is "latest" .
MaxFetchTimeInMs (integer) --
The maximum time spent in the job executor to fetch a record from the Kinesis data stream per shard, specified in milliseconds (ms). The default value is 1000 .
MaxFetchRecordsPerShard (integer) --
The maximum number of records to fetch per shard in the Kinesis data stream. The default value is 100000 .
MaxRecordPerRead (integer) --
The maximum number of records to fetch from the Kinesis data stream in each getRecords operation. The default value is 10000 .
AddIdleTimeBetweenReads (boolean) --
Adds a time delay between two consecutive getRecords operations. The default value is "False" . This option is only configurable for Glue version 2.0 and above.
IdleTimeBetweenReadsInMs (integer) --
The minimum time delay between two consecutive getRecords operations, specified in ms. The default value is 1000 . This option is only configurable for Glue version 2.0 and above.
DescribeShardInterval (integer) --
The minimum time interval between two ListShards API calls for your script to consider resharding. The default value is 1s .
NumRetries (integer) --
The maximum number of retries for Kinesis Data Streams API requests. The default value is 3 .
RetryIntervalMs (integer) --
The cool-off time period (specified in ms) before retrying the Kinesis Data Streams API call. The default value is 1000 .
MaxRetryIntervalMs (integer) --
The maximum cool-off time period (specified in ms) between two retries of a Kinesis Data Streams API call. The default value is 10000 .
AvoidEmptyBatches (boolean) --
Avoids creating an empty microbatch job by checking for unread data in the Kinesis data stream before the batch is started. The default value is "False" .
StreamArn (string) --
The Amazon Resource Name (ARN) of the Kinesis data stream.
RoleArn (string) --
The Amazon Resource Name (ARN) of the role to assume using AWS Security Token Service (AWS STS). This role must have permissions for describe or read record operations for the Kinesis data stream. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSSessionName" .
RoleSessionName (string) --
An identifier for the session assuming the role using AWS STS. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSRoleARN" .
DataPreviewOptions (dict) --
Additional options for data preview.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
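As an illustration of how these options compose, a hedged sketch of a Kinesis source node (the stream ARN and node name are hypothetical; most streaming options are left at their defaults):

# Hypothetical Kinesis source node; only a few streaming options are set.
kinesis_source_node = {
    'DirectKinesisSource': {
        'Name': 'read_clickstream',   # hypothetical
        'WindowSize': 100,            # illustrative micro-batch processing time
        'DetectSchema': True,
        'StreamingOptions': {
            'StreamArn': 'arn:aws:kinesis:us-east-1:123456789012:stream/example-stream',  # hypothetical
            'StartingPosition': 'trim_horizon',
            'Classification': 'json',
        },
    },
}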
DirectKafkaSource (dict) --
Specifies an Apache Kafka data store.
Name (string) --
The name of the data store.
StreamingOptions (dict) --
Specifies the streaming options.
BootstrapServers (string) --
A list of bootstrap server URLs, for example, as b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094 . This option must be specified in the API call or defined in the table metadata in the Data Catalog.
SecurityProtocol (string) --
The protocol used to communicate with brokers. The possible values are "SSL" or "PLAINTEXT" .
ConnectionName (string) --
The name of the connection.
TopicName (string) --
The topic name as specified in Apache Kafka. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Assign (string) --
The specific TopicPartitions to consume. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
SubscribePattern (string) --
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingOffsets (string) --
The starting position in the Kafka topic to read data from. The possible values are "earliest" or "latest" . The default value is "latest" .
EndingOffsets (string) --
The end point when a batch query is ended. Possible values are either "latest" or a JSON string that specifies an ending offset for each TopicPartition .
PollTimeoutMs (integer) --
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is 512 .
NumRetries (integer) --
The number of times to retry before failing to fetch Kafka offsets. The default value is 3 .
RetryIntervalMs (integer) --
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is 10 .
MaxOffsetsPerTrigger (integer) --
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across topicPartitions of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.
MinPartitions (integer) --
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of spark partitions is equal to the number of Kafka partitions.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
DataPreviewOptions (dict) --
Specifies options related to data preview for viewing a sample of your data.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
CatalogKinesisSource (dict) --
Specifies a Kinesis data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
Table (string) --
The name of the table in the database to read from.
Database (string) --
The name of the database to read from.
StreamingOptions (dict) --
Additional options for the Kinesis streaming data source.
EndpointUrl (string) --
The URL of the Kinesis endpoint.
StreamName (string) --
The name of the Kinesis data stream.
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingPosition (string) --
The starting position in the Kinesis data stream to read data from. The possible values are "latest" , "trim_horizon" , or "earliest" . The default value is "latest" .
MaxFetchTimeInMs (integer) --
The maximum time spent in the job executor to fetch a record from the Kinesis data stream per shard, specified in milliseconds (ms). The default value is 1000 .
MaxFetchRecordsPerShard (integer) --
The maximum number of records to fetch per shard in the Kinesis data stream. The default value is 100000 .
MaxRecordPerRead (integer) --
The maximum number of records to fetch from the Kinesis data stream in each getRecords operation. The default value is 10000 .
AddIdleTimeBetweenReads (boolean) --
Adds a time delay between two consecutive getRecords operations. The default value is "False" . This option is only configurable for Glue version 2.0 and above.
IdleTimeBetweenReadsInMs (integer) --
The minimum time delay between two consecutive getRecords operations, specified in ms. The default value is 1000 . This option is only configurable for Glue version 2.0 and above.
DescribeShardInterval (integer) --
The minimum time interval between two ListShards API calls for your script to consider resharding. The default value is 1s .
NumRetries (integer) --
The maximum number of retries for Kinesis Data Streams API requests. The default value is 3 .
RetryIntervalMs (integer) --
The cool-off time period (specified in ms) before retrying the Kinesis Data Streams API call. The default value is 1000 .
MaxRetryIntervalMs (integer) --
The maximum cool-off time period (specified in ms) between two retries of a Kinesis Data Streams API call. The default value is 10000 .
AvoidEmptyBatches (boolean) --
Avoids creating an empty microbatch job by checking for unread data in the Kinesis data stream before the batch is started. The default value is "False" .
StreamArn (string) --
The Amazon Resource Name (ARN) of the Kinesis data stream.
RoleArn (string) --
The Amazon Resource Name (ARN) of the role to assume using AWS Security Token Service (AWS STS). This role must have permissions for describe or read record operations for the Kinesis data stream. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSSessionName" .
RoleSessionName (string) --
An identifier for the session assuming the role using AWS STS. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSRoleARN" .
DataPreviewOptions (dict) --
Additional options for data preview.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
CatalogKafkaSource (dict) --
Specifies an Apache Kafka data store in the Data Catalog.
Name (string) --
The name of the data store.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
Table (string) --
The name of the table in the database to read from.
Database (string) --
The name of the database to read from.
StreamingOptions (dict) --
Specifies the streaming options.
BootstrapServers (string) --
A list of bootstrap server URLs, for example, as b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094 . This option must be specified in the API call or defined in the table metadata in the Data Catalog.
SecurityProtocol (string) --
The protocol used to communicate with brokers. The possible values are "SSL" or "PLAINTEXT" .
ConnectionName (string) --
The name of the connection.
TopicName (string) --
The topic name as specified in Apache Kafka. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Assign (string) --
The specific TopicPartitions to consume. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
SubscribePattern (string) --
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingOffsets (string) --
The starting position in the Kafka topic to read data from. The possible values are "earliest" or "latest" . The default value is "latest" .
EndingOffsets (string) --
The end point when a batch query is ended. Possible values are either "latest" or a JSON string that specifies an ending offset for each TopicPartition .
PollTimeoutMs (integer) --
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is 512 .
NumRetries (integer) --
The number of times to retry before failing to fetch Kafka offsets. The default value is 3 .
RetryIntervalMs (integer) --
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is 10 .
MaxOffsetsPerTrigger (integer) --
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across topicPartitions of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.
MinPartitions (integer) --
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of spark partitions is equal to the number of Kafka partitions.
DataPreviewOptions (dict) --
Specifies options related to data preview for viewing a sample of your data.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
DropNullFields (dict) --
Specifies a transform that removes columns from the dataset if all values in the column are 'null'. By default, Glue Studio recognizes null objects, but some values, such as empty strings, strings that are "null", -1 integers, or other placeholders such as zeros, are not automatically recognized as nulls.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
NullCheckBoxList (dict) --
A structure that represents whether certain values are recognized as null values for removal.
IsEmpty (boolean) --
Specifies that an empty string is considered as a null value.
IsNullString (boolean) --
Specifies that a value spelling out the word 'null' is considered as a null value.
IsNegOne (boolean) --
Specifies that an integer value of -1 is considered as a null value.
NullTextList (list) --
A structure that specifies a list of NullValueField structures that represent a custom null value, such as zero or another value used as a null placeholder unique to the dataset.
The DropNullFields transform removes custom null values only if both the value of the null placeholder and the datatype match the data.
(dict) --
Represents a custom null value, such as zero or another value used as a null placeholder unique to the dataset.
Value (string) --
The value of the null placeholder.
Datatype (dict) --
The datatype of the value.
Id (string) --
The datatype of the value.
Label (string) --
A label assigned to the datatype.
Merge (dict) --
Specifies a transform that merges a DynamicFrame with a staging DynamicFrame based on the specified primary keys to identify records. Duplicate records (records with the same primary keys) are not de-duplicated.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Source (string) --
The source DynamicFrame that will be merged with a staging DynamicFrame .
PrimaryKeys (list) --
The list of primary key fields to match records from the source and staging dynamic frames.
(list) --
(string) --
Union (dict) --
Specifies a transform that combines the rows from two or more datasets into a single result.
Name (string) --
The name of the transform node.
Inputs (list) --
The node ID inputs to the transform.
(string) --
UnionType (string) --
Indicates the type of Union transform.
Specify ALL to join all rows from data sources to the resulting DynamicFrame. The resulting union does not remove duplicate rows.
Specify DISTINCT to remove duplicate rows in the resulting DynamicFrame.
PIIDetection (dict) --
Specifies a transform that identifies, removes or masks PII data.
Name (string) --
The name of the transform node.
Inputs (list) --
The node ID inputs to the transform.
(string) --
PiiType (string) --
Indicates the type of PIIDetection transform.
EntityTypesToDetect (list) --
Indicates the types of entities the PIIDetection transform will identify as PII data.
PII type entities include: PERSON_NAME, DATE, USA_SNN, EMAIL, USA_ITIN, USA_PASSPORT_NUMBER, PHONE_NUMBER, BANK_ACCOUNT, IP_ADDRESS, MAC_ADDRESS, USA_CPT_CODE, USA_HCPCS_CODE, USA_NATIONAL_DRUG_CODE, USA_MEDICARE_BENEFICIARY_IDENTIFIER, USA_HEALTH_INSURANCE_CLAIM_NUMBER, CREDIT_CARD, USA_NATIONAL_PROVIDER_IDENTIFIER, USA_DEA_NUMBER, USA_DRIVING_LICENSE
(string) --
OutputColumnName (string) --
Indicates the output column name that will contain any entity type detected in that row.
SampleFraction (float) --
Indicates the fraction of the data to sample when scanning for PII entities.
ThresholdFraction (float) --
Indicates the fraction of the data that must be met in order for a column to be identified as PII data.
MaskValue (string) --
Indicates the value that will replace the detected entity.
Aggregate (dict) --
Specifies a transform that groups rows by chosen fields and computes the aggregated value by specified function.
Name (string) --
The name of the transform node.
Inputs (list) --
Specifies the fields and rows to use as inputs for the aggregate transform.
(string) --
Groups (list) --
Specifies the fields to group by.
(list) --
(string) --
Aggs (list) --
Specifies the aggregate functions to be performed on specified fields.
(dict) --
Specifies the set of parameters needed to perform aggregation in the aggregate transform.
Column (list) --
Specifies the column on the data set on which the aggregation function will be applied.
(string) --
AggFunc (string) --
Specifies the aggregation function to apply.
Possible aggregation functions include: avg, countDistinct, count, first, last, kurtosis, max, min, skewness, stddev_samp, stddev_pop, sum, sumDistinct, var_samp, var_pop
DropDuplicates (dict) --
Specifies a transform that removes rows of repeating data from a data set.
Name (string) --
The name of the transform node.
Inputs (list) --
The data inputs identified by their node names.
(string) --
Columns (list) --
The names of the columns to be merged or removed if repeating.
(list) --
(string) --
GovernedCatalogTarget (dict) --
Specifies a data target that writes to a governed catalog.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Table (string) --
The name of the table in the database to write to.
Database (string) --
The name of the database to write to.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the governed catalog.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
GovernedCatalogSource (dict) --
Specifies a data source in a governed Data Catalog.
Name (string) --
The name of the data store.
Database (string) --
The database to read from.
Table (string) --
The database table to read from.
PartitionPredicate (string) --
Partitions satisfying this predicate are deleted. Files within the retention period in these partitions are not deleted. Set to "" – empty by default.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
MicrosoftSQLServerCatalogSource (dict) --
Specifies a Microsoft SQL server data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
MySQLCatalogSource (dict) --
Specifies a MySQL data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
OracleSQLCatalogSource (dict) --
Specifies an Oracle data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
PostgreSQLCatalogSource (dict) --
Specifies a PostgreSQL data source in the Glue Data Catalog.
Name (string) --
The name of the data source.
Database (string) --
The name of the database to read from.
Table (string) --
The name of the table in the database to read from.
MicrosoftSQLServerCatalogTarget (dict) --
Specifies a target that uses Microsoft SQL.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
MySQLCatalogTarget (dict) --
Specifies a target that uses MySQL.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
OracleSQLCatalogTarget (dict) --
Specifies a target that uses Oracle SQL.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
PostgreSQLCatalogTarget (dict) --
Specifies a target that uses PostgreSQL.
Name (string) --
The name of the data target.
Inputs (list) --
The nodes that are inputs to the data target.
(string) --
Database (string) --
The name of the database to write to.
Table (string) --
The name of the table in the database to write to.
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
NextToken (string) --
A continuation token, if not all job definitions have yet been returned.
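As a usage illustration (a minimal sketch, not part of the API reference): the snippet below calls batch_get_jobs and reports each job's ExecutionClass, treating jobs created before this release, where the field may be absent, as STANDARD. The job names are hypothetical placeholders.
import boto3

glue = boto3.client('glue')

# Hypothetical job names; replace with names returned by the ListJobs operation.
response = glue.batch_get_jobs(JobNames=['nightly-etl', 'adhoc-backfill'])
for job in response['Jobs']:
    # ExecutionClass may be missing on older job definitions; assume STANDARD.
    print(job['Name'], job.get('ExecutionClass', 'STANDARD'))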
{'Trigger': {'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'WAITING', 'ERROR'}}}}}
Retrieves the definition of a trigger.
See also: AWS API Documentation
Request Syntax
client.get_trigger( Name='string' )
string
[REQUIRED]
The name of the trigger to retrieve.
dict
Response Syntax
{ 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }
Response Structure
(dict) --
Trigger (dict) --
The requested trigger definition.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
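A minimal sketch of calling get_trigger and inspecting the predicate conditions; per this release's shape change, the State and CrawlState fields can also surface the WAITING and ERROR values. The trigger name is a hypothetical placeholder.
import boto3

glue = boto3.client('glue')

trigger = glue.get_trigger(Name='after-nightly-etl')['Trigger']
for condition in trigger.get('Predicate', {}).get('Conditions', []):
    if 'JobName' in condition:
        # Job conditions carry a job run State (may include ERROR or WAITING).
        print('job', condition['JobName'], condition.get('State'))
    elif 'CrawlerName' in condition:
        # Crawler conditions carry a CrawlState (may include ERROR).
        print('crawler', condition['CrawlerName'], condition.get('CrawlState'))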
{'Triggers': {'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'WAITING', 'ERROR'}}}}}
Gets all the triggers associated with a job.
See also: AWS API Documentation
Request Syntax
client.get_triggers( NextToken='string', DependentJobName='string', MaxResults=123 )
string
A continuation token, if this is a continuation call.
string
The name of the job to retrieve triggers for. The trigger that can start this job is returned, and if there is no such trigger, all triggers are returned.
integer
The maximum size of the response.
dict
Response Syntax
{ 'Triggers': [ { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } }, ], 'NextToken': 'string' }
Response Structure
(dict) --
Triggers (list) --
A list of triggers for the specified job.
(dict) --
Information about a specific trigger.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
NextToken (string) --
A continuation token, if not all the requested triggers have yet been returned.
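A minimal sketch of paging through get_triggers by following NextToken; the dependent job name is a hypothetical placeholder.
import boto3

glue = boto3.client('glue')

kwargs = {'DependentJobName': 'nightly-etl', 'MaxResults': 100}
while True:
    page = glue.get_triggers(**kwargs)
    for trigger in page['Triggers']:
        print(trigger['Name'], trigger['Type'], trigger['State'])
    token = page.get('NextToken')
    if not token:
        break
    # Pass the continuation token back in to fetch the next page.
    kwargs['NextToken'] = token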
{'Workflow': {'Graph': {'Nodes': {'CrawlerDetails': {'Crawls': {'State': {'ERROR'}}}, 'JobDetails': {'JobRuns': {'ExecutionClass': 'FLEX | STANDARD', 'JobRunState': {'ERROR', 'WAITING'}}}, 'TriggerDetails': {'Trigger': {'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'ERROR', 'WAITING'}}}}}}}, 'LastRun': {'Graph': {'Nodes': {'CrawlerDetails': {'Crawls': {'State': {'ERROR'}}}, 'JobDetails': {'JobRuns': {'ExecutionClass': 'FLEX | STANDARD', 'JobRunState': {'ERROR', 'WAITING'}}}, 'TriggerDetails': {'Trigger': {'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'ERROR', 'WAITING'}}}}}}}, 'Statistics': {'ErroredActions': 'integer', 'WaitingActions': 'integer'}}}}
Retrieves resource metadata for a workflow.
See also: AWS API Documentation
Request Syntax
client.get_workflow( Name='string', IncludeGraph=True|False )
string
[REQUIRED]
The name of the workflow to retrieve.
boolean
Specifies whether to include a graph when returning the workflow resource metadata.
dict
Response Syntax
{ 'Workflow': { 'Name': 'string', 'Description': 'string', 'DefaultRunProperties': { 'string': 'string' }, 'CreatedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'LastRun': { 'Name': 'string', 'WorkflowRunId': 'string', 'PreviousRunId': 'string', 'WorkflowRunProperties': { 'string': 'string' }, 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'Status': 'RUNNING'|'COMPLETED'|'STOPPING'|'STOPPED'|'ERROR', 'ErrorMessage': 'string', 'Statistics': { 'TotalActions': 123, 'TimeoutActions': 123, 'FailedActions': 123, 'StoppedActions': 123, 'SucceededActions': 123, 'RunningActions': 123, 'ErroredActions': 123, 'WaitingActions': 123 }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'StartingEventBatchCondition': { 'BatchSize': 123, 'BatchWindow': 123 } }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 
'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'MaxConcurrentRuns': 123, 'BlueprintDetails': { 'BlueprintName': 'string', 'RunId': 'string' } } }
Response Structure
(dict) --
Workflow (dict) --
The resource metadata for the workflow.
Name (string) --
The name of the workflow.
Description (string) --
A description of the workflow.
DefaultRunProperties (dict) --
A collection of properties to be used as part of each execution of the workflow. The run properties are made available to each job in the workflow. A job can modify the properties for the next jobs in the flow.
(string) --
(string) --
CreatedOn (datetime) --
The date and time when the workflow was created.
LastModifiedOn (datetime) --
The date and time when the workflow was last modified.
LastRun (dict) --
The information about the last execution of the workflow.
Name (string) --
Name of the workflow that was run.
WorkflowRunId (string) --
The ID of this workflow run.
PreviousRunId (string) --
The ID of the previous workflow run.
WorkflowRunProperties (dict) --
The workflow run properties which were set during the run.
(string) --
(string) --
StartedOn (datetime) --
The date and time when the workflow run was started.
CompletedOn (datetime) --
The date and time when the workflow run completed.
Status (string) --
The status of the workflow run.
ErrorMessage (string) --
This error message describes any error that may have occurred in starting the workflow run. Currently the only error message is "Concurrent runs exceeded for workflow: foo ."
Statistics (dict) --
The statistics of the run.
TotalActions (integer) --
Total number of Actions in the workflow run.
TimeoutActions (integer) --
Total number of Actions that timed out.
FailedActions (integer) --
Total number of Actions that have failed.
StoppedActions (integer) --
Total number of Actions that have stopped.
SucceededActions (integer) --
Total number of Actions that have succeeded.
RunningActions (integer) --
Total number of Actions in the running state.
ErroredActions (integer) --
Indicates the count of job runs in the ERROR state in the workflow run.
WaitingActions (integer) --
Indicates the count of job runs in WAITING state in the workflow run.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may differ from executionEngineRuntime * MaxCapacity because, for Auto Scaling jobs, the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique ID of the node within the workflow where the edge starts.
DestinationId (string) --
The unique ID of the node within the workflow where the edge ends.
StartingEventBatchCondition (dict) --
The batch condition that started the workflow run.
BatchSize (integer) --
Number of events in the batch.
BatchWindow (integer) --
Duration of the batch window in seconds.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may differ from executionEngineRuntime * MaxCapacity because, for Auto Scaling jobs, the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique ID of the node within the workflow where the edge starts.
DestinationId (string) --
The unique ID of the node within the workflow where the edge ends.
MaxConcurrentRuns (integer) --
You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.
BlueprintDetails (dict) --
This structure indicates the details of the blueprint that this particular workflow was created from.
BlueprintName (string) --
The name of the blueprint.
RunId (string) --
The run ID for this blueprint.
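A minimal sketch of retrieving a workflow with its graph and summarizing the last run, including the ErroredActions and WaitingActions counters added in this release; the workflow name is a hypothetical placeholder.
import boto3

glue = boto3.client('glue')

workflow = glue.get_workflow(Name='daily-pipeline', IncludeGraph=True)['Workflow']
nodes = workflow.get('Graph', {}).get('Nodes', [])
print('nodes in graph:', len(nodes))

last_run = workflow.get('LastRun')
if last_run:
    stats = last_run.get('Statistics', {})
    print('last run status:', last_run.get('Status'))
    # Statistics counters introduced in this release.
    print('errored actions:', stats.get('ErroredActions', 0))
    print('waiting actions:', stats.get('WaitingActions', 0))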
{'Run': {'Graph': {'Nodes': {'CrawlerDetails': {'Crawls': {'State': {'ERROR'}}}, 'JobDetails': {'JobRuns': {'ExecutionClass': 'FLEX | STANDARD', 'JobRunState': {'ERROR', 'WAITING'}}}, 'TriggerDetails': {'Trigger': {'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'ERROR', 'WAITING'}}}}}}}, 'Statistics': {'ErroredActions': 'integer', 'WaitingActions': 'integer'}}}
Retrieves the metadata for a given workflow run.
See also: AWS API Documentation
Request Syntax
client.get_workflow_run( Name='string', RunId='string', IncludeGraph=True|False )
string
[REQUIRED]
Name of the workflow being run.
string
[REQUIRED]
The ID of the workflow run.
boolean
Specifies whether to include the workflow graph in the response or not.
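A minimal sketch of calling get_workflow_run and flagging job runs in the ERROR or WAITING states from the graph in the response described below; the workflow name and run ID are hypothetical placeholders.
import boto3

glue = boto3.client('glue')

run = glue.get_workflow_run(
    Name='daily-pipeline',
    RunId='wr_0123456789abcdef',
    IncludeGraph=True,
)['Run']

for node in run.get('Graph', {}).get('Nodes', []):
    for job_run in node.get('JobDetails', {}).get('JobRuns', []):
        # Report job runs that ended up in the ERROR or WAITING states.
        if job_run['JobRunState'] in ('ERROR', 'WAITING'):
            print(node['Name'], job_run['Id'], job_run['JobRunState'])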
dict
Response Syntax
{ 'Run': { 'Name': 'string', 'WorkflowRunId': 'string', 'PreviousRunId': 'string', 'WorkflowRunProperties': { 'string': 'string' }, 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'Status': 'RUNNING'|'COMPLETED'|'STOPPING'|'STOPPED'|'ERROR', 'ErrorMessage': 'string', 'Statistics': { 'TotalActions': 123, 'TimeoutActions': 123, 'FailedActions': 123, 'StoppedActions': 123, 'SucceededActions': 123, 'RunningActions': 123, 'ErroredActions': 123, 'WaitingActions': 123 }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'StartingEventBatchCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }
Response Structure
(dict) --
Run (dict) --
The requested workflow run metadata.
Name (string) --
Name of the workflow that was run.
WorkflowRunId (string) --
The ID of this workflow run.
PreviousRunId (string) --
The ID of the previous workflow run.
WorkflowRunProperties (dict) --
The workflow run properties which were set during the run.
(string) --
(string) --
StartedOn (datetime) --
The date and time when the workflow run was started.
CompletedOn (datetime) --
The date and time when the workflow run completed.
Status (string) --
The status of the workflow run.
ErrorMessage (string) --
This error message describes any error that may have occurred in starting the workflow run. Currently the only error message is "Concurrent runs exceeded for workflow: foo ."
Statistics (dict) --
The statistics of the run.
TotalActions (integer) --
Total number of Actions in the workflow run.
TimeoutActions (integer) --
Total number of Actions that timed out.
FailedActions (integer) --
Total number of Actions that have failed.
StoppedActions (integer) --
Total number of Actions that have stopped.
SucceededActions (integer) --
Total number of Actions that have succeeded.
RunningActions (integer) --
Total number of Actions in the running state.
ErroredActions (integer) --
Indicates the count of job runs in the ERROR state in the workflow run.
WaitingActions (integer) --
Indicates the count of job runs in WAITING state in the workflow run.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique ID of the node within the workflow where the edge starts.
DestinationId (string) --
The unique ID of the node within the workflow where the edge ends.
StartingEventBatchCondition (dict) --
The batch condition that started the workflow run.
BatchSize (integer) --
Number of events in the batch.
BatchWindow (integer) --
Duration of the batch window in seconds.
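A minimal usage sketch for this operation (assuming the GetWorkflowRun request takes the workflow Name, the RunId, and an optional IncludeGraph flag; the names below are placeholders), reading the run status and action statistics from the response structure above:
import boto3

glue = boto3.client('glue')

# Placeholder workflow name and run ID.
response = glue.get_workflow_run(
    Name='my-workflow',
    RunId='wr_0123456789abcdef',
    IncludeGraph=False,
)

run = response['Run']
stats = run.get('Statistics', {})
print('Status:', run['Status'])
print('Succeeded:', stats.get('SucceededActions', 0),
      'Failed:', stats.get('FailedActions', 0),
      'Errored:', stats.get('ErroredActions', 0),
      'Waiting:', stats.get('WaitingActions', 0))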
{'Runs': {'Graph': {'Nodes': {'CrawlerDetails': {'Crawls': {'State': {'ERROR'}}}, 'JobDetails': {'JobRuns': {'ExecutionClass': 'FLEX | STANDARD', 'JobRunState': {'ERROR', 'WAITING'}}}, 'TriggerDetails': {'Trigger': {'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'ERROR', 'WAITING'}}}}}}}, 'Statistics': {'ErroredActions': 'integer', 'WaitingActions': 'integer'}}}
Retrieves metadata for all runs of a given workflow.
See also: AWS API Documentation
Request Syntax
client.get_workflow_runs( Name='string', IncludeGraph=True|False, NextToken='string', MaxResults=123 )
string
[REQUIRED]
Name of the workflow whose metadata of runs should be returned.
boolean
Specifies whether to include the workflow graph in the response or not.
string
A continuation token, if this is a continuation call.
integer
The maximum number of workflow runs to be included in the response.
dict
Response Syntax
{ 'Runs': [ { 'Name': 'string', 'WorkflowRunId': 'string', 'PreviousRunId': 'string', 'WorkflowRunProperties': { 'string': 'string' }, 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'Status': 'RUNNING'|'COMPLETED'|'STOPPING'|'STOPPED'|'ERROR', 'ErrorMessage': 'string', 'Statistics': { 'TotalActions': 123, 'TimeoutActions': 123, 'FailedActions': 123, 'StoppedActions': 123, 'SucceededActions': 123, 'RunningActions': 123, 'ErroredActions': 123, 'WaitingActions': 123 }, 'Graph': { 'Nodes': [ { 'Type': 'CRAWLER'|'JOB'|'TRIGGER', 'Name': 'string', 'UniqueId': 'string', 'TriggerDetails': { 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }, 'JobDetails': { 'JobRuns': [ { 'Id': 'string', 'Attempt': 123, 'PreviousRunId': 'string', 'TriggerName': 'string', 'JobName': 'string', 'StartedOn': datetime(2015, 1, 1), 'LastModifiedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'JobRunState': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'Arguments': { 'string': 'string' }, 'ErrorMessage': 'string', 'PredecessorRuns': [ { 'JobName': 'string', 'RunId': 'string' }, ], 'AllocatedCapacity': 123, 'ExecutionTime': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'LogGroupName': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'DPUSeconds': 123.0, 'ExecutionClass': 'FLEX'|'STANDARD' }, ] }, 'CrawlerDetails': { 'Crawls': [ { 'State': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR', 'StartedOn': datetime(2015, 1, 1), 'CompletedOn': datetime(2015, 1, 1), 'ErrorMessage': 'string', 'LogGroup': 'string', 'LogStream': 'string' }, ] } }, ], 'Edges': [ { 'SourceId': 'string', 'DestinationId': 'string' }, ] }, 'StartingEventBatchCondition': { 'BatchSize': 123, 'BatchWindow': 123 } }, ], 'NextToken': 'string' }
Response Structure
(dict) --
Runs (list) --
A list of workflow run metadata objects.
(dict) --
A workflow run is an execution of a workflow providing all the runtime information.
Name (string) --
Name of the workflow that was run.
WorkflowRunId (string) --
The ID of this workflow run.
PreviousRunId (string) --
The ID of the previous workflow run.
WorkflowRunProperties (dict) --
The workflow run properties which were set during the run.
(string) --
(string) --
StartedOn (datetime) --
The date and time when the workflow run was started.
CompletedOn (datetime) --
The date and time when the workflow run completed.
Status (string) --
The status of the workflow run.
ErrorMessage (string) --
This error message describes any error that may have occurred in starting the workflow run. Currently the only error message is "Concurrent runs exceeded for workflow: foo ."
Statistics (dict) --
The statistics of the run.
TotalActions (integer) --
Total number of Actions in the workflow run.
TimeoutActions (integer) --
Total number of Actions that timed out.
FailedActions (integer) --
Total number of Actions that have failed.
StoppedActions (integer) --
Total number of Actions that have stopped.
SucceededActions (integer) --
Total number of Actions that have succeeded.
RunningActions (integer) --
Total number of Actions in the running state.
ErroredActions (integer) --
Indicates the count of job runs in the ERROR state in the workflow run.
WaitingActions (integer) --
Indicates the count of job runs in WAITING state in the workflow run.
Graph (dict) --
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Nodes (list) --
A list of the Glue components that belong to the workflow, represented as nodes.
(dict) --
A node represents a Glue component (trigger, crawler, or job) on a workflow graph.
Type (string) --
The type of Glue component represented by the node.
Name (string) --
The name of the Glue component represented by the node.
UniqueId (string) --
The unique Id assigned to the node within the workflow.
TriggerDetails (dict) --
Details of the Trigger when the node represents a Trigger.
Trigger (dict) --
The information of the trigger represented by the trigger node.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
JobDetails (dict) --
Details of the Job when the node represents a Job.
JobRuns (list) --
The information for the job runs represented by the job node.
(dict) --
Contains information about a job run.
Id (string) --
The ID of this job run.
Attempt (integer) --
The number of the attempt to run this job.
PreviousRunId (string) --
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
TriggerName (string) --
The name of the trigger that started this job run.
JobName (string) --
The name of the job definition being used in this run.
StartedOn (datetime) --
The date and time at which this job run was started.
LastModifiedOn (datetime) --
The last time that this job run was modified.
CompletedOn (datetime) --
The date and time that this job run completed.
JobRunState (string) --
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Arguments (dict) --
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
ErrorMessage (string) --
An error message associated with this job run.
PredecessorRuns (list) --
A list of predecessors to this job run.
(dict) --
A job run that was used in the predicate of a conditional trigger that triggered this job run.
JobName (string) --
The name of the job definition used by the predecessor job run.
RunId (string) --
The job-run ID of the predecessor job run.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
ExecutionTime (integer) --
The amount of time (in seconds) that the job run consumed resources.
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
MaxCapacity (float) --
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job run.
LogGroupName (string) --
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. This name can be /aws-glue/jobs/ , in which case the default encryption is NONE . If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/ ), then that security configuration is used to encrypt the log group.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
DPUSeconds (float) --
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X , 2 for G.2X , or 0.25 for G.025X workers). This value may be different than the executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the number of executors running at a given time may be less than the MaxCapacity . Therefore, it is possible that the value of DPUSeconds is less than executionEngineRuntime * MaxCapacity .
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
CrawlerDetails (dict) --
Details of the crawler when the node represents a crawler.
Crawls (list) --
A list of crawls represented by the crawl node.
(dict) --
The details of a crawl in the workflow.
State (string) --
The state of the crawler.
StartedOn (datetime) --
The date and time on which the crawl started.
CompletedOn (datetime) --
The date and time on which the crawl completed.
ErrorMessage (string) --
The error message associated with the crawl.
LogGroup (string) --
The log group associated with the crawl.
LogStream (string) --
The log stream associated with the crawl.
Edges (list) --
A list of all the directed connections between the nodes belonging to the workflow.
(dict) --
An edge represents a directed connection between two Glue components that are part of the workflow the edge belongs to.
SourceId (string) --
The unique ID of the node within the workflow where the edge starts.
DestinationId (string) --
The unique ID of the node within the workflow where the edge ends.
StartingEventBatchCondition (dict) --
The batch condition that started the workflow run.
BatchSize (integer) --
Number of events in the batch.
BatchWindow (integer) --
Duration of the batch window in seconds.
NextToken (string) --
A continuation token, if not all requested workflow runs have been returned.
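A sketch of paging through all runs of a workflow with the NextToken continuation token, following the request syntax above (the workflow name is a placeholder):
import boto3

glue = boto3.client('glue')

# Fetch runs a page at a time until no continuation token is returned.
kwargs = {'Name': 'my-workflow', 'IncludeGraph': False, 'MaxResults': 25}
while True:
    page = glue.get_workflow_runs(**kwargs)
    for run in page['Runs']:
        print(run['WorkflowRunId'], run['Status'], run.get('CompletedOn'))
    token = page.get('NextToken')
    if not token:
        break
    kwargs['NextToken'] = token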
{'ExecutionClass': 'FLEX | STANDARD'}
Starts a job run using a job definition.
See also: AWS API Documentation
Request Syntax
client.start_job_run( JobName='string', JobRunId='string', Arguments={ 'string': 'string' }, AllocatedCapacity=123, Timeout=123, MaxCapacity=123.0, SecurityConfiguration='string', NotificationProperty={ 'NotifyDelayAfter': 123 }, WorkerType='Standard'|'G.1X'|'G.2X'|'G.025X', NumberOfWorkers=123, ExecutionClass='FLEX'|'STANDARD' )
string
[REQUIRED]
The name of the job definition to use.
string
The ID of a previous JobRun to retry.
dict
The job arguments specifically for this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
integer
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this JobRun. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
integer
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
float
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, or an Apache Spark ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
string
The name of the SecurityConfiguration structure to be used with this job run.
dict
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
string
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
integer
The number of workers of a defined workerType that are allocated when a job runs.
string
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
dict
Response Syntax
{ 'JobRunId': 'string' }
Response Structure
(dict) --
JobRunId (string) --
The ID assigned to this job run.
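A minimal sketch of starting a run on spare capacity with the flexible execution class; the job name and argument are placeholders, and per the notes above the job must be a Glue 3.0+ glueetl (Spark) job for FLEX to be accepted:
import boto3

glue = boto3.client('glue')

response = glue.start_job_run(
    JobName='my-etl-job',                    # placeholder job name
    ExecutionClass='FLEX',                   # run on spare capacity
    Arguments={'--my_custom_arg': 'value'},  # hypothetical job argument
)
print('Started run:', response['JobRunId'])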
{'JobUpdate': {'ExecutionClass': 'FLEX | STANDARD'}}
Updates an existing job definition. The previous job definition is completely overwritten by this information.
See also: AWS API Documentation
Request Syntax
client.update_job( JobName='string', JobUpdate={ 'Description': 'string', 'LogUri': 'string', 'Role': 'string', 'ExecutionProperty': { 'MaxConcurrentRuns': 123 }, 'Command': { 'Name': 'string', 'ScriptLocation': 'string', 'PythonVersion': 'string' }, 'DefaultArguments': { 'string': 'string' }, 'NonOverridableArguments': { 'string': 'string' }, 'Connections': { 'Connections': [ 'string', ] }, 'MaxRetries': 123, 'AllocatedCapacity': 123, 'Timeout': 123, 'MaxCapacity': 123.0, 'WorkerType': 'Standard'|'G.1X'|'G.2X'|'G.025X', 'NumberOfWorkers': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'GlueVersion': 'string', 'CodeGenConfigurationNodes': { 'string': { 'AthenaConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'ConnectionTable': 'string', 'SchemaName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'JDBCConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'FilterPredicate': 'string', 'PartitionColumn': 'string', 'LowerBound': 123, 'UpperBound': 123, 'NumPartitions': 123, 'JobBookmarkKeys': [ 'string', ], 'JobBookmarkKeysSortOrder': 'string', 'DataTypeMapping': { 'string': 'DATE'|'STRING'|'TIMESTAMP'|'INT'|'FLOAT'|'LONG'|'BIGDECIMAL'|'BYTE'|'SHORT'|'DOUBLE' } }, 'ConnectionTable': 'string', 'Query': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorSource': { 'Name': 'string', 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'RedshiftSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string' }, 'S3CatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'S3CsvSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'Separator': 'comma'|'ctrla'|'pipe'|'semicolon'|'tab', 'Escaper': 'string', 'QuoteChar': 'quote'|'quillemet'|'single_quote'|'disabled', 'Multiline': True|False, 'WithHeader': True|False, 'WriteHeader': True|False, 'SkipFirst': True|False, 'OptimizePerformance': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3JsonSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'gzip'|'bzip2', 'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'JsonPath': 'string', 'Multiline': True|False, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'S3ParquetSource': { 'Name': 'string', 'Paths': [ 'string', ], 'CompressionType': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 
'Exclusions': [ 'string', ], 'GroupSize': 'string', 'GroupFiles': 'string', 'Recurse': True|False, 'MaxBand': 123, 'MaxFilesInBand': 123, 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123, 'EnableSamplePath': True|False, 'SamplePath': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'RelationalCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'DynamoDBCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'JDBCConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectionTable': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkConnectorTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'ConnectionName': 'string', 'ConnectorName': 'string', 'ConnectionType': 'string', 'AdditionalOptions': { 'string': 'string' }, 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'RedshiftTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string', 'RedshiftTmpDir': 'string', 'TmpDirIAMRole': 'string', 'UpsertRedshiftOptions': { 'TableLocation': 'string', 'ConnectionName': 'string', 'UpsertKeys': [ 'string', ] } }, 'S3CatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'S3GlueParquetTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'snappy'|'lzo'|'gzip'|'uncompressed'|'none', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'S3DirectTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Path': 'string', 'Compression': 'string', 'Format': 'json'|'csv'|'avro'|'orc'|'parquet', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG', 'Table': 'string', 'Database': 'string' } }, 'ApplyMapping': { 'Name': 'string', 'Inputs': [ 'string', ], 'Mapping': [ { 'ToKey': 'string', 'FromPath': [ 'string', ], 'FromType': 'string', 'ToType': 'string', 'Dropped': True|False, 'Children': {'... 
recursive ...'} }, ] }, 'SelectFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'DropFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'RenameField': { 'Name': 'string', 'Inputs': [ 'string', ], 'SourcePath': [ 'string', ], 'TargetPath': [ 'string', ] }, 'Spigot': { 'Name': 'string', 'Inputs': [ 'string', ], 'Path': 'string', 'Topk': 123, 'Prob': 123.0 }, 'Join': { 'Name': 'string', 'Inputs': [ 'string', ], 'JoinType': 'equijoin'|'left'|'right'|'outer'|'leftsemi'|'leftanti', 'Columns': [ { 'From': 'string', 'Keys': [ [ 'string', ], ] }, ] }, 'SplitFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'Paths': [ [ 'string', ], ] }, 'SelectFromCollection': { 'Name': 'string', 'Inputs': [ 'string', ], 'Index': 123 }, 'FillMissingValues': { 'Name': 'string', 'Inputs': [ 'string', ], 'ImputedPath': 'string', 'FilledPath': 'string' }, 'Filter': { 'Name': 'string', 'Inputs': [ 'string', ], 'LogicalOperator': 'AND'|'OR', 'Filters': [ { 'Operation': 'EQ'|'LT'|'GT'|'LTE'|'GTE'|'REGEX'|'ISNULL', 'Negated': True|False, 'Values': [ { 'Type': 'COLUMNEXTRACTED'|'CONSTANT', 'Value': [ 'string', ] }, ] }, ] }, 'CustomCode': { 'Name': 'string', 'Inputs': [ 'string', ], 'Code': 'string', 'ClassName': 'string', 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'SparkSQL': { 'Name': 'string', 'Inputs': [ 'string', ], 'SqlQuery': 'string', 'SqlAliases': [ { 'From': 'string', 'Alias': 'string' }, ], 'OutputSchemas': [ { 'Columns': [ { 'Name': 'string', 'Type': 'string' }, ] }, ] }, 'DirectKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string' }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DirectKafkaSource': { 'Name': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123 }, 'WindowSize': 123, 'DetectSchema': True|False, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKinesisSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'EndpointUrl': 'string', 'StreamName': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingPosition': 'latest'|'trim_horizon'|'earliest', 'MaxFetchTimeInMs': 123, 'MaxFetchRecordsPerShard': 123, 'MaxRecordPerRead': 123, 'AddIdleTimeBetweenReads': True|False, 'IdleTimeBetweenReadsInMs': 123, 'DescribeShardInterval': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxRetryIntervalMs': 123, 'AvoidEmptyBatches': True|False, 'StreamArn': 'string', 'RoleArn': 'string', 'RoleSessionName': 'string' }, 
'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'CatalogKafkaSource': { 'Name': 'string', 'WindowSize': 123, 'DetectSchema': True|False, 'Table': 'string', 'Database': 'string', 'StreamingOptions': { 'BootstrapServers': 'string', 'SecurityProtocol': 'string', 'ConnectionName': 'string', 'TopicName': 'string', 'Assign': 'string', 'SubscribePattern': 'string', 'Classification': 'string', 'Delimiter': 'string', 'StartingOffsets': 'string', 'EndingOffsets': 'string', 'PollTimeoutMs': 123, 'NumRetries': 123, 'RetryIntervalMs': 123, 'MaxOffsetsPerTrigger': 123, 'MinPartitions': 123 }, 'DataPreviewOptions': { 'PollingTime': 123, 'RecordPollingLimit': 123 } }, 'DropNullFields': { 'Name': 'string', 'Inputs': [ 'string', ], 'NullCheckBoxList': { 'IsEmpty': True|False, 'IsNullString': True|False, 'IsNegOne': True|False }, 'NullTextList': [ { 'Value': 'string', 'Datatype': { 'Id': 'string', 'Label': 'string' } }, ] }, 'Merge': { 'Name': 'string', 'Inputs': [ 'string', ], 'Source': 'string', 'PrimaryKeys': [ [ 'string', ], ] }, 'Union': { 'Name': 'string', 'Inputs': [ 'string', ], 'UnionType': 'ALL'|'DISTINCT' }, 'PIIDetection': { 'Name': 'string', 'Inputs': [ 'string', ], 'PiiType': 'RowAudit'|'RowMasking'|'ColumnAudit'|'ColumnMasking', 'EntityTypesToDetect': [ 'string', ], 'OutputColumnName': 'string', 'SampleFraction': 123.0, 'ThresholdFraction': 123.0, 'MaskValue': 'string' }, 'Aggregate': { 'Name': 'string', 'Inputs': [ 'string', ], 'Groups': [ [ 'string', ], ], 'Aggs': [ { 'Column': [ 'string', ], 'AggFunc': 'avg'|'countDistinct'|'count'|'first'|'last'|'kurtosis'|'max'|'min'|'skewness'|'stddev_samp'|'stddev_pop'|'sum'|'sumDistinct'|'var_samp'|'var_pop' }, ] }, 'DropDuplicates': { 'Name': 'string', 'Inputs': [ 'string', ], 'Columns': [ [ 'string', ], ] }, 'GovernedCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'PartitionKeys': [ [ 'string', ], ], 'Table': 'string', 'Database': 'string', 'SchemaChangePolicy': { 'EnableUpdateCatalog': True|False, 'UpdateBehavior': 'UPDATE_IN_DATABASE'|'LOG' } }, 'GovernedCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string', 'PartitionPredicate': 'string', 'AdditionalOptions': { 'BoundedSize': 123, 'BoundedFiles': 123 } }, 'MicrosoftSQLServerCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogSource': { 'Name': 'string', 'Database': 'string', 'Table': 'string' }, 'MicrosoftSQLServerCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'MySQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'OracleSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' }, 'PostgreSQLCatalogTarget': { 'Name': 'string', 'Inputs': [ 'string', ], 'Database': 'string', 'Table': 'string' } } }, 'ExecutionClass': 'FLEX'|'STANDARD' } )
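Because JobUpdate completely overwrites the previous definition (unspecified configuration is removed or reset, as noted in the parameter descriptions below), a common pattern is to start from the definition returned by get_job and change only the fields of interest. A sketch, with a placeholder job name:
import boto3

glue = boto3.client('glue')

job_name = 'my-etl-job'  # placeholder

# Start from the existing definition so the required Role and Command are
# preserved, then switch the job to the flexible execution class.
job = glue.get_job(JobName=job_name)['Job']

# Drop read-only fields that are not part of JobUpdate. Depending on the job,
# the deprecated AllocatedCapacity (and MaxCapacity, when WorkerType and
# NumberOfWorkers are set) may also need to be removed.
for key in ('Name', 'CreatedOn', 'LastModifiedOn', 'AllocatedCapacity'):
    job.pop(key, None)

job['ExecutionClass'] = 'FLEX'
glue.update_job(JobName=job_name, JobUpdate=job)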
string
[REQUIRED]
The name of the job definition to update.
dict
[REQUIRED]
Specifies the values with which to update the job definition. Unspecified configuration is removed or reset to default values.
Description (string) --
Description of the job being defined.
LogUri (string) --
This field is reserved for future use.
Role (string) --
The name or Amazon Resource Name (ARN) of the IAM role associated with this job (required).
ExecutionProperty (dict) --
An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.
MaxConcurrentRuns (integer) --
The maximum number of concurrent runs allowed for the job. The default is 1. An error is returned when this threshold is reached. The maximum value you can specify is controlled by a service limit.
Command (dict) --
The JobCommand that runs this job (required).
Name (string) --
The name of the job command. For an Apache Spark ETL job, this must be glueetl . For a Python shell job, it must be pythonshell . For an Apache Spark streaming ETL job, this must be gluestreaming .
ScriptLocation (string) --
Specifies the Amazon Simple Storage Service (Amazon S3) path to a script that runs a job.
PythonVersion (string) --
The Python version being used to run a Python shell job. Allowed values are 2 or 3.
DefaultArguments (dict) --
The default arguments for this job.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
NonOverridableArguments (dict) --
Non-overridable arguments for this job, specified as name-value pairs.
(string) --
(string) --
Connections (dict) --
The connections used for this job.
Connections (list) --
A list of connections used by the job.
(string) --
MaxRetries (integer) --
The maximum number of times to retry this job if it fails.
AllocatedCapacity (integer) --
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this job. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Timeout (integer) --
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
MaxCapacity (float) --
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers .
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job ( JobCommand.Name ="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job ( JobCommand.Name ="glueetl") or Apache Spark streaming ETL job ( JobCommand.Name ="gluestreaming"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
For Glue version 2.0 jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.
WorkerType (string) --
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
NumberOfWorkers (integer) --
The number of workers of a defined workerType that are allocated when a job runs.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this job.
NotificationProperty (dict) --
Specifies the configuration properties of a job notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
GlueVersion (string) --
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
CodeGenConfigurationNodes (dict) --
The representation of a directed acyclic graph on which both the Glue Studio visual component and Glue Studio code generation is based.
(string) --
(dict) --
CodeGenConfigurationNode enumerates all valid Node types. One and only one of its member variables can be populated.
AthenaConnectorSource (dict) --
Specifies a connector to an Amazon Athena data source.
Name (string) -- [REQUIRED]
The name of the data source.
ConnectionName (string) -- [REQUIRED]
The name of the connection that is associated with the connector.
ConnectorName (string) -- [REQUIRED]
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) -- [REQUIRED]
The type of connection, such as marketplace.athena or custom.athena, designating a connection to an Amazon Athena data store.
ConnectionTable (string) --
The name of the table in the data source.
SchemaName (string) -- [REQUIRED]
The name of the Cloudwatch log group to read from. For example, /aws-glue/jobs/output .
OutputSchemas (list) --
Specifies the data schema for the custom Athena source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
JDBCConnectorSource (dict) --
Specifies a connector to a JDBC data source.
Name (string) -- [REQUIRED]
The name of the data source.
ConnectionName (string) -- [REQUIRED]
The name of the connection that is associated with the connector.
ConnectorName (string) -- [REQUIRED]
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) -- [REQUIRED]
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
FilterPredicate (string) --
Extra condition clause to filter data from source. For example:
BillingCity='Mountain View'
When using a query instead of a table name, you should validate that the query works with the specified filterPredicate .
PartitionColumn (string) --
The name of an integer column that is used for partitioning. This option works only when it's included with lowerBound , upperBound , and numPartitions . This option works the same way as in the Spark SQL JDBC reader.
LowerBound (integer) --
The minimum value of partitionColumn that is used to decide partition stride.
UpperBound (integer) --
The maximum value of partitionColumn that is used to decide partition stride.
NumPartitions (integer) --
The number of partitions. This value, along with lowerBound (inclusive) and upperBound (exclusive), form partition strides for generated WHERE clause expressions that are used to split the partitionColumn .
JobBookmarkKeys (list) --
The name of the job bookmark keys on which to sort.
(string) --
JobBookmarkKeysSortOrder (string) --
Specifies an ascending or descending sort order.
DataTypeMapping (dict) --
Custom data type mapping that builds a mapping from a JDBC data type to a Glue data type. For example, the option "dataTypeMapping":{"FLOAT":"STRING"} maps data fields of JDBC type FLOAT into the Java String type by calling the ResultSet.getString() method of the driver, and uses it to build the Glue record. The ResultSet object is implemented by each driver, so the behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the driver performs the conversions.
(string) --
(string) --
ConnectionTable (string) --
The name of the table in the data source.
Query (string) --
The table or SQL query to get the data from. You can specify either ConnectionTable or query , but not both.
OutputSchemas (list) --
Specifies the data schema for the custom JDBC source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
SparkConnectorSource (dict) --
Specifies a connector to an Apache Spark data source.
Name (string) -- [REQUIRED]
The name of the data source.
ConnectionName (string) -- [REQUIRED]
The name of the connection that is associated with the connector.
ConnectorName (string) -- [REQUIRED]
The name of a connector that assists with accessing the data store in Glue Studio.
ConnectionType (string) -- [REQUIRED]
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies data schema for the custom spark source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
CatalogSource (dict) --
Specifies a data store in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data store.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
RedshiftSource (dict) --
Specifies an Amazon Redshift data store.
Name (string) -- [REQUIRED]
The name of the Amazon Redshift data store.
Database (string) -- [REQUIRED]
The database to read from.
Table (string) -- [REQUIRED]
The database table to read from.
RedshiftTmpDir (string) --
The Amazon S3 path where temporary data can be staged when copying out of the database.
TmpDirIAMRole (string) --
The IAM role with permissions.
S3CatalogSource (dict) --
Specifies an Amazon S3 data store in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data store.
Database (string) -- [REQUIRED]
The database to read from.
Table (string) -- [REQUIRED]
The database table to read from.
PartitionPredicate (string) --
Partitions satisfying this predicate are deleted. Files within the retention period in these partitions are not deleted. Set to "" (empty) by default.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
S3CsvSource (dict) --
Specifies a comma-separated value (CSV) data store stored in Amazon S3.
Name (string) -- [REQUIRED]
The name of the data store.
Paths (list) -- [REQUIRED]
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
Separator (string) -- [REQUIRED]
Specifies the delimiter character. The default is a comma: ",", but any other character can be specified.
Escaper (string) --
Specifies a character to use for escaping. This option is used only when reading CSV files. The default value is none . If enabled, the character which immediately follows is used as-is, except for a small set of well-known escapes ( \n , \r , \t , and \0 ).
QuoteChar (string) -- [REQUIRED]
Specifies the character to use for quoting. The default is a double quote: '"' . Set this to -1 to turn off quoting entirely.
Multiline (boolean) --
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The default value is False , which allows for more aggressive file-splitting during parsing.
WithHeader (boolean) --
A Boolean value that specifies whether to treat the first line as a header. The default value is False .
WriteHeader (boolean) --
A Boolean value that specifies whether to write the header to output. The default value is True .
SkipFirst (boolean) --
A Boolean value that specifies whether to skip the first data line. The default value is False .
OptimizePerformance (boolean) --
A Boolean value that specifies whether to use the advanced SIMD CSV reader along with Apache Arrow based columnar memory formats. Only available in Glue version 3.0.
OutputSchemas (list) --
Specifies the data schema for the S3 CSV source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
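A sketch of an S3CsvSource node built from the fields above. The bucket, paths, and schema are illustrative placeholders, not values from this document.
s3_csv_source_node = {
    'S3CsvSource': {
        'Name': 's3_csv_source',
        'Paths': ['s3://example-bucket/input/'],     # hypothetical S3 path
        'Separator': 'comma',
        'QuoteChar': 'quote',
        'WithHeader': True,                          # treat the first line as a header
        'Recurse': True,                             # also read files in subdirectories
        'AdditionalOptions': {
            'BoundedFiles': 1000,                    # cap the number of files processed
        },
        'OutputSchemas': [
            {'Columns': [
                {'Name': 'id', 'Type': 'int'},
                {'Name': 'name', 'Type': 'string'},
            ]},
        ],
    },
}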
S3JsonSource (dict) --
Specifies a JSON data store stored in Amazon S3.
Name (string) -- [REQUIRED]
The name of the data store.
Paths (list) -- [REQUIRED]
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
JsonPath (string) --
A JsonPath string defining the JSON data.
Multiline (boolean) --
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The default value is False , which allows for more aggressive file-splitting during parsing.
OutputSchemas (list) --
Specifies the data schema for the S3 JSON source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
S3ParquetSource (dict) --
Specifies an Apache Parquet data store stored in Amazon S3.
Name (string) -- [REQUIRED]
The name of the data store.
Paths (list) -- [REQUIRED]
A list of the Amazon S3 paths to read from.
(string) --
CompressionType (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip2".
Exclusions (list) --
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "["**.pdf"]" excludes all PDF files.
(string) --
GroupSize (string) --
The target group size in bytes. The default is computed based on the input data size and the size of your cluster. When there are fewer than 50,000 input files, "groupFiles" must be set to "inPartition" for this to take effect.
GroupFiles (string) --
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000 files, set this parameter to "none" .
Recurse (boolean) --
If set to true, recursively reads files in all subdirectories under the specified paths.
MaxBand (integer) --
This option controls the duration in milliseconds after which the s3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
MaxFilesInBand (integer) --
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
EnableSamplePath (boolean) --
Sets option to enable a sample path.
SamplePath (string) --
If enabled, specifies the sample path.
OutputSchemas (list) --
Specifies the data schema for the S3 Parquet source.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
RelationalCatalogSource (dict) --
Specifies a Relational database data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
DynamoDBCatalogSource (dict) --
Specifies a DynamoDB data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
JDBCConnectorTarget (dict) --
Specifies a data target that writes to a JDBC data store using a connector.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
ConnectionName (string) -- [REQUIRED]
The name of the connection that is associated with the connector.
ConnectionTable (string) -- [REQUIRED]
The name of the table in the data target.
ConnectorName (string) -- [REQUIRED]
The name of a connector that will be used.
ConnectionType (string) -- [REQUIRED]
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data target.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies the data schema for the JDBC target.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
SparkConnectorTarget (dict) --
Specifies a target that uses an Apache Spark connector.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
ConnectionName (string) -- [REQUIRED]
The name of a connection for an Apache Spark connector.
ConnectorName (string) -- [REQUIRED]
The name of an Apache Spark connector.
ConnectionType (string) -- [REQUIRED]
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
AdditionalOptions (dict) --
Additional connection options for the connector.
(string) --
(string) --
OutputSchemas (list) --
Specifies the data schema for the custom spark target.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
CatalogTarget (dict) --
Specifies a target that uses a Glue Data Catalog table.
Name (string) -- [REQUIRED]
The name of your data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The database that contains the table you want to use as the target. This database must already exist in the Data Catalog.
Table (string) -- [REQUIRED]
The table that defines the schema of your output data. This table must already exist in the Data Catalog.
RedshiftTarget (dict) --
Specifies a target that uses Amazon Redshift.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The name of the database to write to.
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
RedshiftTmpDir (string) --
The Amazon S3 path where temporary data can be staged when copying out of the database.
TmpDirIAMRole (string) --
The IAM role with permissions.
UpsertRedshiftOptions (dict) --
The set of options to configure an upsert operation when writing to a Redshift target.
TableLocation (string) --
The physical location of the Redshift table.
ConnectionName (string) --
The name of the connection to use to write to Redshift.
UpsertKeys (list) --
The keys used to determine whether to perform an update or insert.
(string) --
S3CatalogTarget (dict) --
Specifies a data target that writes to Amazon S3 using the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
Database (string) -- [REQUIRED]
The name of the database to write to.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
S3GlueParquetTarget (dict) --
Specifies a data target that writes to Amazon S3 in Apache Parquet columnar storage.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Path (string) -- [REQUIRED]
A single Amazon S3 path to write to.
Compression (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip".
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
Table (string) --
Specifies the table in the database that the schema change policy applies to.
Database (string) --
Specifies the database that the schema change policy applies to.
S3DirectTarget (dict) --
Specifies a data target that writes to Amazon S3.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Path (string) -- [REQUIRED]
A single Amazon S3 path to write to.
Compression (string) --
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension. Possible values are "gzip" and "bzip".
Format (string) -- [REQUIRED]
Specifies the data output format for the target.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the crawler.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
Table (string) --
Specifies the table in the database that the schema change policy applies to.
Database (string) --
Specifies the database that the schema change policy applies to.
ApplyMapping (dict) --
Specifies a transform that maps data property keys in the data source to data property keys in the data target. You can rename keys, modify the data types for keys, and choose which keys to drop from the dataset.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Mapping (list) -- [REQUIRED]
Specifies the mapping of data property keys in the data source to data property keys in the data target.
(dict) --
Specifies the mapping of data property keys.
ToKey (string) --
The name of the column after the mapping is applied. It can be the same as FromPath.
FromPath (list) --
The table or column to be modified.
(string) --
FromType (string) --
The type of the data to be modified.
ToType (string) --
The data type that the data is to be modified to.
Dropped (boolean) --
If true, then the column is removed.
Children (list) --
Only applicable to nested data structures. If you want to change the parent structure, and also one of its children, you can fill out this data structure. It is also a Mapping, but its FromPath will be the parent's FromPath plus the FromPath from this structure.
For the children part, suppose you have the structure:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Children": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false }] }
You can specify a Mapping that looks like:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Children": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false }] }
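Putting the ApplyMapping fields together, a node entry might look like the following sketch. Node names, input names, and column names are illustrative only.
apply_mapping_node = {
    'ApplyMapping': {
        'Name': 'rename_and_cast',
        'Inputs': ['s3_csv_source'],                 # name of the upstream node
        'Mapping': [
            {
                'FromPath': ['id'],
                'FromType': 'string',
                'ToKey': 'order_id',                 # rename the column
                'ToType': 'long',                    # and cast its type
                'Dropped': False,
            },
            {
                'FromPath': ['legacy_field'],
                'FromType': 'string',
                'ToKey': 'legacy_field',
                'ToType': 'string',
                'Dropped': True,                     # drop this column from the output
            },
        ],
    },
}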
SelectFields (dict) --
Specifies a transform that chooses the data property keys that you want to keep.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Paths (list) -- [REQUIRED]
A JSON path to a variable in the data structure.
(list) --
(string) --
DropFields (dict) --
Specifies a transform that chooses the data property keys that you want to drop.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Paths (list) -- [REQUIRED]
A JSON path to a variable in the data structure.
(list) --
(string) --
RenameField (dict) --
Specifies a transform that renames a single data property key.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
SourcePath (list) -- [REQUIRED]
A JSON path to a variable in the data structure for the source data.
(string) --
TargetPath (list) -- [REQUIRED]
A JSON path to a variable in the data structure for the target data.
(string) --
Spigot (dict) --
Specifies a transform that writes samples of the data to an Amazon S3 bucket.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Path (string) -- [REQUIRED]
A path in Amazon S3 where the transform will write a subset of records from the dataset to a JSON file in an Amazon S3 bucket.
Topk (integer) --
Specifies a number of records to write starting from the beginning of the dataset.
Prob (float) --
The probability (a decimal value with a maximum value of 1) of picking any given record. A value of 1 indicates that each row read from the dataset should be included in the sample output.
Join (dict) --
Specifies a transform that joins two datasets into one dataset using a comparison phrase on the specified data property keys. You can use inner, outer, left, right, left semi, and left anti joins.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
JoinType (string) -- [REQUIRED]
Specifies the type of join to be performed on the datasets.
Columns (list) -- [REQUIRED]
A list of the two columns to be joined.
(dict) --
Specifies a column to be joined.
From (string) -- [REQUIRED]
The column to be joined.
Keys (list) -- [REQUIRED]
The key of the column to be joined.
(list) --
(string) --
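For illustration only (node names, column names, and the JoinType string are placeholders; the exact enumerated JoinType values are not listed in this section), a Join node combining two inputs on a shared key might be written as:
join_node = {
    'Join': {
        'Name': 'join_orders_customers',
        'Inputs': ['orders_source', 'customers_source'],   # the two nodes being joined
        'JoinType': 'left',                                 # placeholder join-type value
        'Columns': [
            {'From': 'orders_source', 'Keys': [['customer_id']]},
            {'From': 'customers_source', 'Keys': [['id']]},
        ],
    },
}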
SplitFields (dict) --
Specifies a transform that splits data property keys into two DynamicFrames . The output is a collection of DynamicFrames : one with selected data property keys, and one with the remaining data property keys.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Paths (list) -- [REQUIRED]
A JSON path to a variable in the data structure.
(list) --
(string) --
SelectFromCollection (dict) --
Specifies a transform that chooses one DynamicFrame from a collection of DynamicFrames . The output is the selected DynamicFrame .
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Index (integer) -- [REQUIRED]
The index for the DynamicFrame to be selected.
FillMissingValues (dict) --
Specifies a transform that locates records in the dataset that have missing values and adds a new field with a value determined by imputation. The input data set is used to train the machine learning model that determines what the missing value should be.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
ImputedPath (string) -- [REQUIRED]
A JSON path to a variable in the data structure for the dataset that is imputed.
FilledPath (string) --
A JSON path to a variable in the data structure for the dataset that is filled.
Filter (dict) --
Specifies a transform that splits a dataset into two, based on a filter condition.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
LogicalOperator (string) -- [REQUIRED]
The operator used to filter rows by comparing the key value to a specified value.
Filters (list) -- [REQUIRED]
Specifies a filter expression.
(dict) --
Specifies a filter expression.
Operation (string) -- [REQUIRED]
The type of operation to perform in the expression.
Negated (boolean) --
Whether the expression is to be negated.
Values (list) -- [REQUIRED]
A list of filter values.
(dict) --
Represents a single entry in the list of values for a FilterExpression .
Type (string) -- [REQUIRED]
The type of filter value.
Value (list) -- [REQUIRED]
The value to be associated.
(string) --
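A sketch of a Filter node using the expression structure above. Field names and values are placeholders, and the Operation, LogicalOperator, and value Type strings shown here are assumptions, since this section does not list the allowed values.
filter_node = {
    'Filter': {
        'Name': 'keep_large_orders',
        'Inputs': ['apply_mapping'],
        'LogicalOperator': 'AND',                    # how multiple expressions are combined (assumed value)
        'Filters': [
            {
                'Operation': 'GT',                   # placeholder operation string
                'Negated': False,
                'Values': [
                    {'Type': 'COLUMNEXTRACTED', 'Value': ['amount']},   # placeholder value type
                    {'Type': 'CONSTANT', 'Value': ['100']},
                ],
            },
        ],
    },
}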
CustomCode (dict) --
Specifies a transform that uses custom code you provide to perform the data transformation. The output is a collection of DynamicFrames.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Code (string) -- [REQUIRED]
The custom code that is used to perform the data transformation.
ClassName (string) -- [REQUIRED]
The name defined for the custom code node class.
OutputSchemas (list) --
Specifies the data schema for the custom code transform.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
SparkSQL (dict) --
Specifies a transform where you enter a SQL query using Spark SQL syntax to transform the data. The output is a single DynamicFrame .
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
(string) --
SqlQuery (string) -- [REQUIRED]
A SQL query that must use Spark SQL syntax and return a single data set.
SqlAliases (list) -- [REQUIRED]
A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, suppose you have a data source named "MyDataSource". If you specify From as MyDataSource and Alias as SqlName, then in your SQL you can do:
select * from SqlName
and that gets data from MyDataSource.
(dict) --
Represents a single entry in the list of values for SqlAliases .
From (string) -- [REQUIRED]
A table, or a column in a table.
Alias (string) -- [REQUIRED]
A temporary name given to a table, or a column in a table.
OutputSchemas (list) --
Specifies the data schema for the SparkSQL transform.
(dict) --
Specifies a user-defined schema when a schema cannot be determined by AWS Glue.
Columns (list) --
Specifies the column definitions that make up a Glue schema.
(dict) --
Specifies a single column in a Glue schema definition.
Name (string) -- [REQUIRED]
The name of the column in the Glue Studio schema.
Type (string) --
The hive type for this column in the Glue Studio schema.
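For example, a SparkSQL node that aliases one input and selects from it, following the MyDataSource/SqlName example above. This is a sketch; the node names and query are illustrative.
spark_sql_node = {
    'SparkSQL': {
        'Name': 'sql_transform',
        'Inputs': ['MyDataSource'],                  # upstream node name
        'SqlAliases': [
            {'From': 'MyDataSource', 'Alias': 'SqlName'},
        ],
        'SqlQuery': 'select * from SqlName where amount > 100',
    },
}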
DirectKinesisSource (dict) --
Specifies a direct Amazon Kinesis data source.
Name (string) -- [REQUIRED]
The name of the data source.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
StreamingOptions (dict) --
Additional options for the Kinesis streaming data source.
EndpointUrl (string) --
The URL of the Kinesis endpoint.
StreamName (string) --
The name of the Kinesis data stream.
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingPosition (string) --
The starting position in the Kinesis data stream to read data from. The possible values are "latest" , "trim_horizon" , or "earliest" . The default value is "latest" .
MaxFetchTimeInMs (integer) --
The maximum time spent in the job executor to fetch a record from the Kinesis data stream per shard, specified in milliseconds (ms). The default value is 1000 .
MaxFetchRecordsPerShard (integer) --
The maximum number of records to fetch per shard in the Kinesis data stream. The default value is 100000 .
MaxRecordPerRead (integer) --
The maximum number of records to fetch from the Kinesis data stream in each getRecords operation. The default value is 10000 .
AddIdleTimeBetweenReads (boolean) --
Adds a time delay between two consecutive getRecords operations. The default value is "False" . This option is only configurable for Glue version 2.0 and above.
IdleTimeBetweenReadsInMs (integer) --
The minimum time delay between two consecutive getRecords operations, specified in ms. The default value is 1000 . This option is only configurable for Glue version 2.0 and above.
DescribeShardInterval (integer) --
The minimum time interval between two ListShards API calls for your script to consider resharding. The default value is 1s .
NumRetries (integer) --
The maximum number of retries for Kinesis Data Streams API requests. The default value is 3 .
RetryIntervalMs (integer) --
The cool-off time period (specified in ms) before retrying the Kinesis Data Streams API call. The default value is 1000 .
MaxRetryIntervalMs (integer) --
The maximum cool-off time period (specified in ms) between two retries of a Kinesis Data Streams API call. The default value is 10000 .
AvoidEmptyBatches (boolean) --
Avoids creating an empty microbatch job by checking for unread data in the Kinesis data stream before the batch is started. The default value is "False" .
StreamArn (string) --
The Amazon Resource Name (ARN) of the Kinesis data stream.
RoleArn (string) --
The Amazon Resource Name (ARN) of the role to assume using AWS Security Token Service (AWS STS). This role must have permissions for describe or read record operations for the Kinesis data stream. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSSessionName" .
RoleSessionName (string) --
An identifier for the session assuming the role using AWS STS. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSRoleARN" .
DataPreviewOptions (dict) --
Additional options for data preview.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
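A sketch of a DirectKinesisSource node using a subset of the streaming options above. The stream ARN and names are placeholders, and the WindowSize unit is not stated in this section.
kinesis_source_node = {
    'DirectKinesisSource': {
        'Name': 'kinesis_source',
        'WindowSize': 100,                           # time spent processing each micro batch (unit not stated here)
        'DetectSchema': True,
        'StreamingOptions': {
            'StreamArn': 'arn:aws:kinesis:us-east-1:111122223333:stream/example-stream',  # placeholder ARN
            'StartingPosition': 'trim_horizon',
            'Classification': 'json',                # optional classification, placeholder
            'MaxFetchRecordsPerShard': 100000,
        },
    },
}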
DirectKafkaSource (dict) --
Specifies an Apache Kafka data store.
Name (string) -- [REQUIRED]
The name of the data store.
StreamingOptions (dict) --
Specifies the streaming options.
BootstrapServers (string) --
A list of bootstrap server URLs, for example: b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094 . This option must be specified in the API call or defined in the table metadata in the Data Catalog.
SecurityProtocol (string) --
The protocol used to communicate with brokers. The possible values are "SSL" or "PLAINTEXT" .
ConnectionName (string) --
The name of the connection.
TopicName (string) --
The topic name as specified in Apache Kafka. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Assign (string) --
The specific TopicPartitions to consume. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
SubscribePattern (string) --
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingOffsets (string) --
The starting position in the Kafka topic to read data from. The possible values are "earliest" or "latest" . The default value is "latest" .
EndingOffsets (string) --
The end point when a batch query is ended. Possible values are either "latest" or a JSON string that specifies an ending offset for each TopicPartition .
PollTimeoutMs (integer) --
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is 512 .
NumRetries (integer) --
The number of times to retry before failing to fetch Kafka offsets. The default value is 3 .
RetryIntervalMs (integer) --
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is 10 .
MaxOffsetsPerTrigger (integer) --
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across topicPartitions of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.
MinPartitions (integer) --
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of spark partitions is equal to the number of Kafka partitions.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
DataPreviewOptions (dict) --
Specifies options related to data preview for viewing a sample of your data.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
CatalogKinesisSource (dict) --
Specifies a Kinesis data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
Database (string) -- [REQUIRED]
The name of the database to read from.
StreamingOptions (dict) --
Additional options for the Kinesis streaming data source.
EndpointUrl (string) --
The URL of the Kinesis endpoint.
StreamName (string) --
The name of the Kinesis data stream.
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingPosition (string) --
The starting position in the Kinesis data stream to read data from. The possible values are "latest" , "trim_horizon" , or "earliest" . The default value is "latest" .
MaxFetchTimeInMs (integer) --
The maximum time spent in the job executor to fetch a record from the Kinesis data stream per shard, specified in milliseconds (ms). The default value is 1000 .
MaxFetchRecordsPerShard (integer) --
The maximum number of records to fetch per shard in the Kinesis data stream. The default value is 100000 .
MaxRecordPerRead (integer) --
The maximum number of records to fetch from the Kinesis data stream in each getRecords operation. The default value is 10000 .
AddIdleTimeBetweenReads (boolean) --
Adds a time delay between two consecutive getRecords operations. The default value is "False" . This option is only configurable for Glue version 2.0 and above.
IdleTimeBetweenReadsInMs (integer) --
The minimum time delay between two consecutive getRecords operations, specified in ms. The default value is 1000 . This option is only configurable for Glue version 2.0 and above.
DescribeShardInterval (integer) --
The minimum time interval between two ListShards API calls for your script to consider resharding. The default value is 1s .
NumRetries (integer) --
The maximum number of retries for Kinesis Data Streams API requests. The default value is 3 .
RetryIntervalMs (integer) --
The cool-off time period (specified in ms) before retrying the Kinesis Data Streams API call. The default value is 1000 .
MaxRetryIntervalMs (integer) --
The maximum cool-off time period (specified in ms) between two retries of a Kinesis Data Streams API call. The default value is 10000 .
AvoidEmptyBatches (boolean) --
Avoids creating an empty microbatch job by checking for unread data in the Kinesis data stream before the batch is started. The default value is "False" .
StreamArn (string) --
The Amazon Resource Name (ARN) of the Kinesis data stream.
RoleArn (string) --
The Amazon Resource Name (ARN) of the role to assume using AWS Security Token Service (AWS STS). This role must have permissions for describe or read record operations for the Kinesis data stream. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSSessionName" .
RoleSessionName (string) --
An identifier for the session assuming the role using AWS STS. You must use this parameter when accessing a data stream in a different account. Used in conjunction with "awsSTSRoleARN" .
DataPreviewOptions (dict) --
Additional options for data preview.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
CatalogKafkaSource (dict) --
Specifies an Apache Kafka data store in the Data Catalog.
Name (string) -- [REQUIRED]
The name of the data store.
WindowSize (integer) --
The amount of time to spend processing each micro batch.
DetectSchema (boolean) --
Whether to automatically determine the schema from the incoming data.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
Database (string) -- [REQUIRED]
The name of the database to read from.
StreamingOptions (dict) --
Specifies the streaming options.
BootstrapServers (string) --
A list of bootstrap server URLs, for example: b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094 . This option must be specified in the API call or defined in the table metadata in the Data Catalog.
SecurityProtocol (string) --
The protocol used to communicate with brokers. The possible values are "SSL" or "PLAINTEXT" .
ConnectionName (string) --
The name of the connection.
TopicName (string) --
The topic name as specified in Apache Kafka. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Assign (string) --
The specific TopicPartitions to consume. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
SubscribePattern (string) --
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of "topicName" , "assign" or "subscribePattern" .
Classification (string) --
An optional classification.
Delimiter (string) --
Specifies the delimiter character.
StartingOffsets (string) --
The starting position in the Kafka topic to read data from. The possible values are "earliest" or "latest" . The default value is "latest" .
EndingOffsets (string) --
The end point when a batch query is ended. Possible values are either "latest" or a JSON string that specifies an ending offset for each TopicPartition .
PollTimeoutMs (integer) --
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is 512 .
NumRetries (integer) --
The number of times to retry before failing to fetch Kafka offsets. The default value is 3 .
RetryIntervalMs (integer) --
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is 10 .
MaxOffsetsPerTrigger (integer) --
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total number of offsets is proportionally split across topicPartitions of different volumes. The default value is null, which means that the consumer reads all offsets until the known latest offset.
MinPartitions (integer) --
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of spark partitions is equal to the number of Kafka partitions.
DataPreviewOptions (dict) --
Specifies options related to data preview for viewing a sample of your data.
PollingTime (integer) --
The polling time in milliseconds.
RecordPollingLimit (integer) --
The limit to the number of records polled.
DropNullFields (dict) --
Specifies a transform that removes columns from the dataset if all values in the column are 'null'. By default, Glue Studio recognizes null objects, but some values, such as empty strings, strings that are "null", -1 integers, or other placeholders such as zeros, are not automatically recognized as nulls.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
NullCheckBoxList (dict) --
A structure that represents whether certain values are recognized as null values for removal.
IsEmpty (boolean) --
Specifies that an empty string is considered as a null value.
IsNullString (boolean) --
Specifies that a value spelling out the word 'null' is considered as a null value.
IsNegOne (boolean) --
Specifies that an integer value of -1 is considered as a null value.
NullTextList (list) --
A structure that specifies a list of NullValueField structures that represent a custom null value such as zero or other value being used as a null placeholder unique to the dataset.
The DropNullFields transform removes custom null values only if both the value of the null placeholder and the datatype match the data.
(dict) --
Represents a custom null value such as a zeros or other value being used as a null placeholder unique to the dataset.
Value (string) -- [REQUIRED]
The value of the null placeholder.
Datatype (dict) -- [REQUIRED]
The datatype of the value.
Id (string) -- [REQUIRED]
The datatype of the value.
Label (string) -- [REQUIRED]
A label assigned to the datatype.
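As an illustration (the input node name, placeholder value, and datatype Id/Label are assumptions), a DropNullFields node that also treats empty strings and a custom zero placeholder as nulls:
drop_null_fields_node = {
    'DropNullFields': {
        'Name': 'drop_null_fields',
        'Inputs': ['sql_transform'],
        'NullCheckBoxList': {
            'IsEmpty': True,                         # treat "" as a null value
            'IsNullString': True,                    # treat the string "null" as a null value
            'IsNegOne': False,
        },
        'NullTextList': [
            {
                'Value': '0',                        # custom null placeholder unique to the dataset
                'Datatype': {'Id': 'integer', 'Label': 'integer'},   # placeholder datatype id/label
            },
        ],
    },
}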
Merge (dict) --
Specifies a transform that merges a DynamicFrame with a staging DynamicFrame based on the specified primary keys to identify records. Duplicate records (records with the same primary keys) are not de-duplicated.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Source (string) -- [REQUIRED]
The source DynamicFrame that will be merged with a staging DynamicFrame .
PrimaryKeys (list) -- [REQUIRED]
The list of primary key fields to match records from the source and staging dynamic frames.
(list) --
(string) --
Union (dict) --
Specifies a transform that combines the rows from two or more datasets into a single result.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The node ID inputs to the transform.
(string) --
UnionType (string) -- [REQUIRED]
Indicates the type of Union transform.
Specify ALL to join all rows from data sources to the resulting DynamicFrame. The resulting union does not remove duplicate rows.
Specify DISTINCT to remove duplicate rows in the resulting DynamicFrame.
PIIDetection (dict) --
Specifies a transform that identifies, removes or masks PII data.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The node ID inputs to the transform.
(string) --
PiiType (string) -- [REQUIRED]
Indicates the type of PIIDetection transform.
EntityTypesToDetect (list) -- [REQUIRED]
Indicates the types of entities the PIIDetection transform will identify as PII data.
PII type entities include: PERSON_NAME, DATE, USA_SNN, EMAIL, USA_ITIN, USA_PASSPORT_NUMBER, PHONE_NUMBER, BANK_ACCOUNT, IP_ADDRESS, MAC_ADDRESS, USA_CPT_CODE, USA_HCPCS_CODE, USA_NATIONAL_DRUG_CODE, USA_MEDICARE_BENEFICIARY_IDENTIFIER, USA_HEALTH_INSURANCE_CLAIM_NUMBER, CREDIT_CARD, USA_NATIONAL_PROVIDER_IDENTIFIER, USA_DEA_NUMBER, USA_DRIVING_LICENSE
(string) --
OutputColumnName (string) --
Indicates the output column name that will contain any entity type detected in that row.
SampleFraction (float) --
Indicates the fraction of the data to sample when scanning for PII entities.
ThresholdFraction (float) --
Indicates the fraction of the data that must be met in order for a column to be identified as PII data.
MaskValue (string) --
Indicates the value that will replace the detected entity.
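A sketch of a PIIDetection node. The PiiType string, input node, and column names are placeholders; only the entity-type names come from the list above.
pii_detection_node = {
    'PIIDetection': {
        'Name': 'detect_pii',
        'Inputs': ['drop_null_fields'],
        'PiiType': 'RowAudit',                       # placeholder; see the allowed PiiType values in the API reference
        'EntityTypesToDetect': ['EMAIL', 'PHONE_NUMBER'],
        'OutputColumnName': 'detected_entities',     # column that records any detected entity types
        'SampleFraction': 0.1,                       # scan 10% of the rows
        'ThresholdFraction': 0.05,
    },
}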
Aggregate (dict) --
Specifies a transform that groups rows by chosen fields and computes the aggregated value by specified function.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
Specifies the fields and rows to use as inputs for the aggregate transform.
(string) --
Groups (list) -- [REQUIRED]
Specifies the fields to group by.
(list) --
(string) --
Aggs (list) -- [REQUIRED]
Specifies the aggregate functions to be performed on specified fields.
(dict) --
Specifies the set of parameters needed to perform aggregation in the aggregate transform.
Column (list) -- [REQUIRED]
Specifies the column on the data set on which the aggregation function will be applied.
(string) --
AggFunc (string) -- [REQUIRED]
Specifies the aggregation function to apply.
Possible aggregation functions include: avg, countDistinct, count, first, last, kurtosis, max, min, skewness, stddev_samp, stddev_pop, sum, sumDistinct, var_samp, var_pop
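For example (a sketch; field names are placeholders), an Aggregate node that groups by one column and applies two of the functions listed above:
aggregate_node = {
    'Aggregate': {
        'Name': 'total_by_customer',
        'Inputs': ['apply_mapping'],
        'Groups': [['customer_id']],                 # group-by fields (each field is a path list)
        'Aggs': [
            {'Column': ['amount'], 'AggFunc': 'sum'},
            {'Column': ['order_id'], 'AggFunc': 'count'},
        ],
    },
}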
DropDuplicates (dict) --
Specifies a transform that removes rows of repeating data from a data set.
Name (string) -- [REQUIRED]
The name of the transform node.
Inputs (list) -- [REQUIRED]
The data inputs identified by their node names.
(string) --
Columns (list) --
The name of the columns to be merged or removed if repeating.
(list) --
(string) --
GovernedCatalogTarget (dict) --
Specifies a data target that writes to a governed catalog.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
PartitionKeys (list) --
Specifies native partitioning using a sequence of keys.
(list) --
(string) --
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
Database (string) -- [REQUIRED]
The name of the database to write to.
SchemaChangePolicy (dict) --
A policy that specifies update behavior for the governed catalog.
EnableUpdateCatalog (boolean) --
Whether to use the specified update behavior when the crawler finds a changed schema.
UpdateBehavior (string) --
The update behavior when the crawler finds a changed schema.
GovernedCatalogSource (dict) --
Specifies a data source in a governed Data Catalog.
Name (string) -- [REQUIRED]
The name of the data store.
Database (string) -- [REQUIRED]
The database to read from.
Table (string) -- [REQUIRED]
The database table to read from.
PartitionPredicate (string) --
Partitions satisfying this predicate are deleted. Files within the retention period in these partitions are not deleted. Set to "" (empty) by default.
AdditionalOptions (dict) --
Specifies additional connection options.
BoundedSize (integer) --
Sets the upper limit for the target size of the dataset in bytes that will be processed.
BoundedFiles (integer) --
Sets the upper limit for the target number of files that will be processed.
MicrosoftSQLServerCatalogSource (dict) --
Specifies a Microsoft SQL Server data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
MySQLCatalogSource (dict) --
Specifies a MySQL data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
OracleSQLCatalogSource (dict) --
Specifies an Oracle data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
PostgreSQLCatalogSource (dict) --
Specifies a PostgreSQL data source in the Glue Data Catalog.
Name (string) -- [REQUIRED]
The name of the data source.
Database (string) -- [REQUIRED]
The name of the database to read from.
Table (string) -- [REQUIRED]
The name of the table in the database to read from.
MicrosoftSQLServerCatalogTarget (dict) --
Specifies a target that uses Microsoft SQL Server.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The name of the database to write to.
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
MySQLCatalogTarget (dict) --
Specifies a target that uses MySQL.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The name of the database to write to.
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
OracleSQLCatalogTarget (dict) --
Specifies a target that uses Oracle SQL.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The name of the database to write to.
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
PostgreSQLCatalogTarget (dict) --
Specifies a target that uses PostgreSQL.
Name (string) -- [REQUIRED]
The name of the data target.
Inputs (list) -- [REQUIRED]
The nodes that are inputs to the data target.
(string) --
Database (string) -- [REQUIRED]
The name of the database to write to.
Table (string) -- [REQUIRED]
The name of the table in the database to write to.
ExecutionClass (string) --
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX . The flexible execution class is available for Spark jobs.
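As a usage sketch, assuming the UpdateJob request accepts the new field as described in this release (job name, role ARN, and script location are placeholders):
import boto3

glue = boto3.client('glue')

# Run a non-urgent Glue 3.0 Spark (glueetl) job on spare capacity.
glue.update_job(
    JobName='nightly-archive',                       # hypothetical job name
    JobUpdate={
        'Role': 'arn:aws:iam::111122223333:role/GlueJobRole',                       # placeholder role ARN
        'Command': {'Name': 'glueetl', 'ScriptLocation': 's3://example-bucket/scripts/archive.py'},
        'GlueVersion': '3.0',
        'ExecutionClass': 'FLEX',                    # FLEX | STANDARD
    },
)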
dict
Response Syntax
{ 'JobName': 'string' }
Response Structure
(dict) --
JobName (string) --
Returns the name of the updated job definition.
{'TriggerUpdate': {'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'ERROR', 'WAITING'}}}}}
Response
{'Trigger': {'Predicate': {'Conditions': {'CrawlState': {'ERROR'}, 'State': {'WAITING', 'ERROR'}}}}}
Updates a trigger definition.
See also: AWS API Documentation
Request Syntax
client.update_trigger( Name='string', TriggerUpdate={ 'Name': 'string', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } )
string
[REQUIRED]
The name of the trigger to update.
dict
[REQUIRED]
The new values with which to update the trigger.
Name (string) --
Reserved for future use.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) -- [REQUIRED]
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.
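As a usage sketch (trigger and job names are placeholders), an update that makes a conditional trigger fire once an upstream job run succeeds. Note that this release also adds 'ERROR' and 'WAITING' to the allowed State values and 'ERROR' to CrawlState in these conditions.
import boto3

glue = boto3.client('glue')

# Fire the trigger when the upstream job run reaches SUCCEEDED.
glue.update_trigger(
    Name='run-reporting-after-etl',                  # hypothetical trigger name
    TriggerUpdate={
        'Actions': [{'JobName': 'reporting-job'}],   # hypothetical downstream job
        'Predicate': {
            'Logical': 'AND',
            'Conditions': [
                {
                    'LogicalOperator': 'EQUALS',
                    'JobName': 'etl-job',            # hypothetical upstream job
                    'State': 'SUCCEEDED',
                },
            ],
        },
    },
)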
dict
Response Syntax
{ 'Trigger': { 'Name': 'string', 'WorkflowName': 'string', 'Id': 'string', 'Type': 'SCHEDULED'|'CONDITIONAL'|'ON_DEMAND'|'EVENT', 'State': 'CREATING'|'CREATED'|'ACTIVATING'|'ACTIVATED'|'DEACTIVATING'|'DEACTIVATED'|'DELETING'|'UPDATING', 'Description': 'string', 'Schedule': 'string', 'Actions': [ { 'JobName': 'string', 'Arguments': { 'string': 'string' }, 'Timeout': 123, 'SecurityConfiguration': 'string', 'NotificationProperty': { 'NotifyDelayAfter': 123 }, 'CrawlerName': 'string' }, ], 'Predicate': { 'Logical': 'AND'|'ANY', 'Conditions': [ { 'LogicalOperator': 'EQUALS', 'JobName': 'string', 'State': 'STARTING'|'RUNNING'|'STOPPING'|'STOPPED'|'SUCCEEDED'|'FAILED'|'TIMEOUT'|'ERROR'|'WAITING', 'CrawlerName': 'string', 'CrawlState': 'RUNNING'|'CANCELLING'|'CANCELLED'|'SUCCEEDED'|'FAILED'|'ERROR' }, ] }, 'EventBatchingCondition': { 'BatchSize': 123, 'BatchWindow': 123 } } }
Response Structure
(dict) --
Trigger (dict) --
The resulting trigger definition.
Name (string) --
The name of the trigger.
WorkflowName (string) --
The name of the workflow associated with the trigger.
Id (string) --
Reserved for future use.
Type (string) --
The type of trigger that this is.
State (string) --
The current state of the trigger.
Description (string) --
A description of this trigger.
Schedule (string) --
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
Actions (list) --
The actions initiated by this trigger.
(dict) --
Defines an action to be initiated by a trigger.
JobName (string) --
The name of a job to be run.
Arguments (dict) --
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
(string) --
(string) --
Timeout (integer) --
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
SecurityConfiguration (string) --
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty (dict) --
Specifies configuration properties of a job run notification.
NotifyDelayAfter (integer) --
After a job run starts, the number of minutes to wait before sending a job run delay notification.
CrawlerName (string) --
The name of the crawler to be used with this action.
Predicate (dict) --
The predicate of this trigger, which defines when it will fire.
Logical (string) --
An optional field if only one condition is listed. If multiple conditions are listed, then this field is required.
Conditions (list) --
A list of the conditions that determine when the trigger will fire.
(dict) --
Defines a condition under which a trigger fires.
LogicalOperator (string) --
A logical operator.
JobName (string) --
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
State (string) --
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED , STOPPED , FAILED , and TIMEOUT . The only crawler states that a trigger can listen for are SUCCEEDED , FAILED , and CANCELLED .
CrawlerName (string) --
The name of the crawler to which this condition applies.
CrawlState (string) --
The state of the crawler to which this condition applies.
EventBatchingCondition (dict) --
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
BatchSize (integer) --
Number of events that must be received from Amazon EventBridge before EventBridge event trigger fires.
BatchWindow (integer) --
Window of time in seconds after which EventBridge event trigger fires. Window starts when first event is received.