2024/11/22 - Amazon CloudWatch Logs - 9 new / 8 updated api methods
Changes Adds "Create field indexes to improve query performance and reduce scan volume" and "Transform logs during ingestion". Updates documentation for "PutLogEvents with Entity".
Deletes the log transformer for the specified log group. As soon as you do this, the transformation of incoming log events according to that transformer stops. If this account has an account-level transformer that applies to this log group, the log group begins using that account-level transformer when this log-group level transformer is deleted.
After you delete a transformer, be sure to edit any metric filters or subscription filters that relied on the transformed versions of the log events.
See also: AWS API Documentation
Request Syntax
client.delete_transformer( logGroupIdentifier='string' )
string
[REQUIRED]
Specify either the name or ARN of the log group to delete the transformer for. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN.
None
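For example, a minimal boto3 sketch (the log group name my-log-group is a placeholder):

import boto3

logs = boto3.client('logs')

# Remove the log-group level transformer; transformation of incoming
# events stops immediately. Use the log group ARN instead of the name
# if you are calling from a monitoring account.
logs.delete_transformer(logGroupIdentifier='my-log-group')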
Use this operation to test a log transformer. You enter the transformer configuration and a set of log events to test with. The operation responds with an array that includes the original log events and the transformed versions.
See also: AWS API Documentation
Request Syntax
client.test_transformer( transformerConfig=[ { 'addKeys': { 'entries': [ { 'key': 'string', 'value': 'string', 'overwriteIfExists': True|False }, ] }, 'copyValue': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'csv': { 'quoteCharacter': 'string', 'delimiter': 'string', 'columns': [ 'string', ], 'source': 'string' }, 'dateTimeConverter': { 'source': 'string', 'target': 'string', 'targetFormat': 'string', 'matchPatterns': [ 'string', ], 'sourceTimezone': 'string', 'targetTimezone': 'string', 'locale': 'string' }, 'deleteKeys': { 'withKeys': [ 'string', ] }, 'grok': { 'source': 'string', 'match': 'string' }, 'listToMap': { 'source': 'string', 'key': 'string', 'valueKey': 'string', 'target': 'string', 'flatten': True|False, 'flattenedElement': 'first'|'last' }, 'lowerCaseString': { 'withKeys': [ 'string', ] }, 'moveKeys': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'parseCloudfront': { 'source': 'string' }, 'parseJSON': { 'source': 'string', 'destination': 'string' }, 'parseKeyValue': { 'source': 'string', 'destination': 'string', 'fieldDelimiter': 'string', 'keyValueDelimiter': 'string', 'keyPrefix': 'string', 'nonMatchValue': 'string', 'overwriteIfExists': True|False }, 'parseRoute53': { 'source': 'string' }, 'parsePostgres': { 'source': 'string' }, 'parseVPC': { 'source': 'string' }, 'parseWAF': { 'source': 'string' }, 'renameKeys': { 'entries': [ { 'key': 'string', 'renameTo': 'string', 'overwriteIfExists': True|False }, ] }, 'splitString': { 'entries': [ { 'source': 'string', 'delimiter': 'string' }, ] }, 'substituteString': { 'entries': [ { 'source': 'string', 'from': 'string', 'to': 'string' }, ] }, 'trimString': { 'withKeys': [ 'string', ] }, 'typeConverter': { 'entries': [ { 'key': 'string', 'type': 'boolean'|'integer'|'double'|'string' }, ] }, 'upperCaseString': { 'withKeys': [ 'string', ] } }, ], logEventMessages=[ 'string', ] )
list
[REQUIRED]
This structure contains the configuration of this log transformer that you want to test. A log transformer is an array of processors, where each processor applies one type of transformation to the log events that are ingested.
(dict) --
This structure contains the information about one processor in a log transformer.
addKeys (dict) --
Use this parameter to include the addKeys processor in your transformer.
entries (list) -- [REQUIRED]
An array of objects, where each object contains the information about one key to add to the log event.
(dict) --
This object defines one key that will be added with the addKeys processor.
key (string) -- [REQUIRED]
The key of the new entry to be added to the log event
value (string) -- [REQUIRED]
The value of the new entry to be added to the log event
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the key already exists in the log event. If you omit this, the default is false.
copyValue (dict) --
Use this parameter to include the copyValue processor in your transformer.
entries (list) -- [REQUIRED]
An array of CopyValueEntry objects, where each object contains the information about one field value to copy.
(dict) --
This object defines one value to be copied with the copyValue processor.
source (string) -- [REQUIRED]
The key to copy.
target (string) -- [REQUIRED]
The key of the field to copy the value to.
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is false.
csv (dict) --
Use this parameter to include the CSV processor in your transformer.
quoteCharacter (string) --
The character used as a text qualifier for a single column of data. If you omit this, the double quotation mark " character is used.
delimiter (string) --
The character used to separate each column in the original comma-separated value log event. If you omit this, the processor looks for the comma , character as the delimiter.
columns (list) --
An array of names to use for the columns in the transformed log event.
If you omit this, default column names ( [column_1, column_2 ...]) are used.
(string) --
source (string) --
The path to the field in the log event that has the comma separated values to be parsed. If you omit this value, the whole log message is processed.
dateTimeConverter (dict) --
Use this parameter to include the datetimeConverter processor in your transformer.
source (string) -- [REQUIRED]
The key to apply the date conversion to.
target (string) -- [REQUIRED]
The JSON field to store the result in.
targetFormat (string) --
The datetime format to use for the converted data in the target field.
If you omit this, the default of yyyy-MM-dd'T'HH:mm:ss.SSS'Z' is used.
matchPatterns (list) -- [REQUIRED]
A list of patterns to match against the source field.
(string) --
sourceTimezone (string) --
The time zone of the source field. If you omit this, the default used is the UTC zone.
targetTimezone (string) --
The time zone of the target field. If you omit this, the default used is the UTC zone.
locale (string) --
The locale of the source field. If you omit this, the default of locale.ROOT is used.
deleteKeys (dict) --
Use this parameter to include the deleteKeys processor in your transformer.
withKeys (list) -- [REQUIRED]
The list of keys to delete.
(string) --
grok (dict) --
Use this parameter to include the grok processor in your transformer.
source (string) --
The path to the field in the log event that you want to parse. If you omit this value, the whole log message is parsed.
match (string) -- [REQUIRED]
The grok pattern to match against the log event. For a list of supported grok patterns, see Supported grok patterns.
listToMap (dict) --
Use this parameter to include the listToMap processor in your transformer.
source (string) -- [REQUIRED]
The key in the log event that has a list of objects that will be converted to a map.
key (string) -- [REQUIRED]
The key of the field to be extracted as keys in the generated map
valueKey (string) --
If this is specified, the values that you specify in this parameter will be extracted from the source objects and put into the values of the generated map. Otherwise, original objects in the source list will be put into the values of the generated map.
target (string) --
The key of the field that will hold the generated map
flatten (boolean) --
A Boolean value to indicate whether the list will be flattened into single items. Specify true to flatten the list. The default is false
flattenedElement (string) --
If you set flatten to true, use flattenedElement to specify which element, first or last, to keep.
You must specify this parameter if flatten is true
lowerCaseString (dict) --
Use this parameter to include the lowerCaseString processor in your transformer.
withKeys (list) -- [REQUIRED]
The array containing the keys of the fields to convert to lowercase.
(string) --
moveKeys (dict) --
Use this parameter to include the moveKeys processor in your transformer.
entries (list) -- [REQUIRED]
An array of objects, where each object contains the information about one key to move.
(dict) --
This object defines one key that will be moved with the moveKeys processor.
source (string) -- [REQUIRED]
The key to move.
target (string) -- [REQUIRED]
The key to move to.
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is false.
parseCloudfront (dict) --
Use this parameter to include the parseCloudfront processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parseJSON (dict) --
Use this parameter to include the parseJSON processor in your transformer.
source (string) --
Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, store.book
destination (string) --
The location to put the parsed key value pair into. If you omit this parameter, it is placed under the root node.
parseKeyValue (dict) --
Use this parameter to include the parseKeyValue processor in your transformer.
source (string) --
Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, store.book
destination (string) --
The destination field to put the extracted key-value pairs into
fieldDelimiter (string) --
The field delimiter string that is used between key-value pairs in the original log events. If you omit this, the ampersand & character is used.
keyValueDelimiter (string) --
The delimiter string to use between the key and value in each pair in the transformed log event.
If you omit this, the equal = character is used.
keyPrefix (string) --
If you want to add a prefix to all transformed keys, specify it here.
nonMatchValue (string) --
A value to insert into the value field in the result, when a key-value pair is not successfully split.
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is false.
parseRoute53 (dict) --
Use this parameter to include the parseRoute53 processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parsePostgres (dict) --
Use this parameter to include the parsePostgres processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parseVPC (dict) --
Use this parameter to include the parseVPC processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parseWAF (dict) --
Use this parameter to include the parseWAF processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
renameKeys (dict) --
Use this parameter to include the renameKeys processor in your transformer.
entries (list) -- [REQUIRED]
An array of RenameKeyEntry objects, where each object contains the information about a single key to rename.
(dict) --
This object defines one key that will be renamed with the renameKey processor.
key (string) -- [REQUIRED]
The key to rename
renameTo (string) -- [REQUIRED]
The string to use for the new key name
overwriteIfExists (boolean) --
Specifies whether to overwrite the existing value if the destination key already exists. The default is false
splitString (dict) --
Use this parameter to include the splitString processor in your transformer.
entries (list) -- [REQUIRED]
An array of SplitStringEntry objects, where each object contains the information about one field to split.
(dict) --
This object defines one log field that will be split with the splitString processor.
source (string) -- [REQUIRED]
The key of the field to split.
delimiter (string) -- [REQUIRED]
The separator characters to split the string entry on.
substituteString (dict) --
Use this parameter to include the substituteString processor in your transformer.
entries (list) -- [REQUIRED]
An array of objects, where each object contains the information about one key to match and replace.
(dict) --
This object defines one log field key that will be replaced using the substituteString processor.
source (string) -- [REQUIRED]
The key to modify
from (string) -- [REQUIRED]
The regular expression string to be replaced. Special regex characters such as [ and ] must be escaped using \\ when using double quotes and with \ when using single quotes. For more information, see Class Pattern on the Oracle web site.
to (string) -- [REQUIRED]
The string to be substituted for each match of from
trimString (dict) --
Use this parameter to include the trimString processor in your transformer.
withKeys (list) -- [REQUIRED]
The array containing the keys of the fields to trim.
(string) --
typeConverter (dict) --
Use this parameter to include the typeConverter processor in your transformer.
entries (list) -- [REQUIRED]
An array of TypeConverterEntry objects, where each object contains the information about one field to change the type of.
(dict) --
This object defines one value type that will be converted using the typeConverter processor.
key (string) -- [REQUIRED]
The key with the value that is to be converted to a different type.
type (string) -- [REQUIRED]
The type to convert the field value to. Valid values are integer, double, string and boolean.
upperCaseString (dict) --
Use this parameter to include the upperCaseString processor in your transformer.
withKeys (list) -- [REQUIRED]
The array containing the keys of the fields to convert to uppercase.
(string) --
list
[REQUIRED]
An array of the raw log events that you want to use to test this transformer.
(string) --
dict
Response Syntax
{ 'transformedLogs': [ { 'eventNumber': 123, 'eventMessage': 'string', 'transformedEventMessage': 'string' }, ] }
Response Structure
(dict) --
transformedLogs (list) --
An array where each member of the array includes both the original version and the transformed version of one of the log events that you input.
(dict) --
This structure contains information for one log event that has been processed by a log transformer.
eventNumber (integer) --
The event number.
eventMessage (string) --
The original log event message before it was transformed.
transformedEventMessage (string) --
The log event message after being transformed.
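For example, a minimal sketch that tests a two-processor transformer against one sample event. The sample JSON message and the added env key are hypothetical, and the empty parseJSON block relies on the defaults described above (the whole message is parsed and the result is placed under the root node):

import boto3

logs = boto3.client('logs')

response = logs.test_transformer(
    transformerConfig=[
        {'parseJSON': {}},  # parse the whole @message as JSON
        {'addKeys': {'entries': [
            {'key': 'env', 'value': 'prod', 'overwriteIfExists': False},
        ]}},
    ],
    logEventMessages=['{"level": "ERROR", "requestId": "abc-123"}'],
)

# Each result pairs the original message with its transformed version.
for event in response['transformedLogs']:
    print(event['eventNumber'], event['transformedEventMessage'])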
Returns the field index policies of one or more log groups. For more information about field index policies, see PutIndexPolicy.
If a specified log group has a log-group level index policy, that policy is returned by this operation.
If a specified log group doesn't have a log-group level index policy, but an account-wide index policy applies to it, that account-wide policy is returned by this operation.
To find information about only account-level policies, use DescribeAccountPolicies instead.
See also: AWS API Documentation
Request Syntax
client.describe_index_policies( logGroupIdentifiers=[ 'string', ], nextToken='string' )
list
[REQUIRED]
An array containing the name or ARN of the log group that you want to retrieve field index policies for.
(string) --
string
The token for the next set of items to return. The token expires after 24 hours.
dict
Response Syntax
{ 'indexPolicies': [ { 'logGroupIdentifier': 'string', 'lastUpdateTime': 123, 'policyDocument': 'string', 'policyName': 'string', 'source': 'ACCOUNT'|'LOG_GROUP' }, ], 'nextToken': 'string' }
Response Structure
(dict) --
indexPolicies (list) --
An array containing the field index policies.
(dict) --
This structure contains information about one field index policy in this account.
logGroupIdentifier (string) --
The ARN of the log group that this index policy applies to.
lastUpdateTime (integer) --
The date and time that this index policy was most recently updated.
policyDocument (string) --
The policy document for this index policy, in JSON format.
policyName (string) --
The name of this policy. Responses about log group-level field index policies don't have this field, because those policies don't have names.
source (string) --
This field indicates whether this is an account-level index policy or an index policy that applies only to a single log group.
nextToken (string) --
The token for the next set of items to return. The token expires after 24 hours.
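For example, a minimal sketch that retrieves the effective index policy for one log group (the name is a placeholder):

import boto3

logs = boto3.client('logs')

response = logs.describe_index_policies(
    logGroupIdentifiers=['my-log-group'],
)

for policy in response['indexPolicies']:
    # source is 'ACCOUNT' or 'LOG_GROUP', indicating which level applied.
    print(policy['source'], policy['policyDocument'])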
Deletes a log-group level field index policy that was applied to a single log group. The indexing of the log events that happened before you delete the policy will still be used for as many as 30 days to improve CloudWatch Logs Insights queries.
You can't use this operation to delete an account-level index policy. Instead, use DeleteAccountPolicy.
If you delete a log-group level field index policy and there is an account-level field index policy, in a few minutes the log group begins using that account-wide policy to index new incoming log events.
See also: AWS API Documentation
Request Syntax
client.delete_index_policy( logGroupIdentifier='string' )
string
[REQUIRED]
The log group to delete the index policy for. You can specify either the name or the ARN of the log group.
dict
Response Syntax
{}
Response Structure
(dict) --
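For example, a minimal sketch (placeholder log group name):

import boto3

logs = boto3.client('logs')

# Delete the log-group level index policy. If an account-level index
# policy exists, this log group starts using it within a few minutes.
logs.delete_index_policy(logGroupIdentifier='my-log-group')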
Creates or updates a log transformer for a single log group. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information.
After you have created a transformer, CloudWatch Logs performs the transformations at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.
You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region.
A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. The processors work one after another, in the order that you list them, like a pipeline. For more information about the available processors to use in a transformer, see Processors that you can use.
Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies.
You can create transformers only for the log groups in the Standard log class.
You can also set up a transformer at the account level. For more information, see PutAccountPolicy. If there is both a log-group level transformer created with PutTransformer and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer. It ignores the account-level transformer.
See also: AWS API Documentation
Request Syntax
client.put_transformer( logGroupIdentifier='string', transformerConfig=[ { 'addKeys': { 'entries': [ { 'key': 'string', 'value': 'string', 'overwriteIfExists': True|False }, ] }, 'copyValue': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'csv': { 'quoteCharacter': 'string', 'delimiter': 'string', 'columns': [ 'string', ], 'source': 'string' }, 'dateTimeConverter': { 'source': 'string', 'target': 'string', 'targetFormat': 'string', 'matchPatterns': [ 'string', ], 'sourceTimezone': 'string', 'targetTimezone': 'string', 'locale': 'string' }, 'deleteKeys': { 'withKeys': [ 'string', ] }, 'grok': { 'source': 'string', 'match': 'string' }, 'listToMap': { 'source': 'string', 'key': 'string', 'valueKey': 'string', 'target': 'string', 'flatten': True|False, 'flattenedElement': 'first'|'last' }, 'lowerCaseString': { 'withKeys': [ 'string', ] }, 'moveKeys': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'parseCloudfront': { 'source': 'string' }, 'parseJSON': { 'source': 'string', 'destination': 'string' }, 'parseKeyValue': { 'source': 'string', 'destination': 'string', 'fieldDelimiter': 'string', 'keyValueDelimiter': 'string', 'keyPrefix': 'string', 'nonMatchValue': 'string', 'overwriteIfExists': True|False }, 'parseRoute53': { 'source': 'string' }, 'parsePostgres': { 'source': 'string' }, 'parseVPC': { 'source': 'string' }, 'parseWAF': { 'source': 'string' }, 'renameKeys': { 'entries': [ { 'key': 'string', 'renameTo': 'string', 'overwriteIfExists': True|False }, ] }, 'splitString': { 'entries': [ { 'source': 'string', 'delimiter': 'string' }, ] }, 'substituteString': { 'entries': [ { 'source': 'string', 'from': 'string', 'to': 'string' }, ] }, 'trimString': { 'withKeys': [ 'string', ] }, 'typeConverter': { 'entries': [ { 'key': 'string', 'type': 'boolean'|'integer'|'double'|'string' }, ] }, 'upperCaseString': { 'withKeys': [ 'string', ] } }, ] )
string
[REQUIRED]
Specify either the name or ARN of the log group to create the transformer for.
list
[REQUIRED]
This structure contains the configuration of this log transformer. A log transformer is an array of processors, where each processor applies one type of transformation to the log events that are ingested.
(dict) --
This structure contains the information about one processor in a log transformer.
addKeys (dict) --
Use this parameter to include the addKeys processor in your transformer.
entries (list) -- [REQUIRED]
An array of objects, where each object contains the information about one key to add to the log event.
(dict) --
This object defines one key that will be added with the addKeys processor.
key (string) -- [REQUIRED]
The key of the new entry to be added to the log event
value (string) -- [REQUIRED]
The value of the new entry to be added to the log event
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the key already exists in the log event. If you omit this, the default is false.
copyValue (dict) --
Use this parameter to include the copyValue processor in your transformer.
entries (list) -- [REQUIRED]
An array of CopyValueEntry objects, where each object contains the information about one field value to copy.
(dict) --
This object defines one value to be copied with the copyValue processor.
source (string) -- [REQUIRED]
The key to copy.
target (string) -- [REQUIRED]
The key of the field to copy the value to.
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is false.
csv (dict) --
Use this parameter to include the CSV processor in your transformer.
quoteCharacter (string) --
The character used as a text qualifier for a single column of data. If you omit this, the double quotation mark " character is used.
delimiter (string) --
The character used to separate each column in the original comma-separated value log event. If you omit this, the processor looks for the comma , character as the delimiter.
columns (list) --
An array of names to use for the columns in the transformed log event.
If you omit this, default column names ( [column_1, column_2 ...]) are used.
(string) --
source (string) --
The path to the field in the log event that has the comma separated values to be parsed. If you omit this value, the whole log message is processed.
dateTimeConverter (dict) --
Use this parameter to include the datetimeConverter processor in your transformer.
source (string) -- [REQUIRED]
The key to apply the date conversion to.
target (string) -- [REQUIRED]
The JSON field to store the result in.
targetFormat (string) --
The datetime format to use for the converted data in the target field.
If you omit this, the default of yyyy-MM-dd'T'HH:mm:ss.SSS'Z' is used.
matchPatterns (list) -- [REQUIRED]
A list of patterns to match against the source field.
(string) --
sourceTimezone (string) --
The time zone of the source field. If you omit this, the default used is the UTC zone.
targetTimezone (string) --
The time zone of the target field. If you omit this, the default used is the UTC zone.
locale (string) --
The locale of the source field. If you omit this, the default of locale.ROOT is used.
deleteKeys (dict) --
Use this parameter to include the deleteKeys processor in your transformer.
withKeys (list) -- [REQUIRED]
The list of keys to delete.
(string) --
grok (dict) --
Use this parameter to include the grok processor in your transformer.
source (string) --
The path to the field in the log event that you want to parse. If you omit this value, the whole log message is parsed.
match (string) -- [REQUIRED]
The grok pattern to match against the log event. For a list of supported grok patterns, see Supported grok patterns.
listToMap (dict) --
Use this parameter to include the listToMap processor in your transformer.
source (string) -- [REQUIRED]
The key in the log event that has a list of objects that will be converted to a map.
key (string) -- [REQUIRED]
The key of the field to be extracted as keys in the generated map
valueKey (string) --
If this is specified, the values that you specify in this parameter will be extracted from the source objects and put into the values of the generated map. Otherwise, original objects in the source list will be put into the values of the generated map.
target (string) --
The key of the field that will hold the generated map
flatten (boolean) --
A Boolean value to indicate whether the list will be flattened into single items. Specify true to flatten the list. The default is false
flattenedElement (string) --
If you set flatten to true, use flattenedElement to specify which element, first or last, to keep.
You must specify this parameter if flatten is true
lowerCaseString (dict) --
Use this parameter to include the lowerCaseString processor in your transformer.
withKeys (list) -- [REQUIRED]
The array containing the keys of the fields to convert to lowercase.
(string) --
moveKeys (dict) --
Use this parameter to include the moveKeys processor in your transformer.
entries (list) -- [REQUIRED]
An array of objects, where each object contains the information about one key to move.
(dict) --
This object defines one key that will be moved with the moveKeys processor.
source (string) -- [REQUIRED]
The key to move.
target (string) -- [REQUIRED]
The key to move to.
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is false.
parseCloudfront (dict) --
Use this parameter to include the parseCloudfront processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parseJSON (dict) --
Use this parameter to include the parseJSON processor in your transformer.
source (string) --
Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, store.book
destination (string) --
The location to put the parsed key value pair into. If you omit this parameter, it is placed under the root node.
parseKeyValue (dict) --
Use this parameter to include the parseKeyValue processor in your transformer.
source (string) --
Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, store.book
destination (string) --
The destination field to put the extracted key-value pairs into
fieldDelimiter (string) --
The field delimiter string that is used between key-value pairs in the original log events. If you omit this, the ampersand & character is used.
keyValueDelimiter (string) --
The delimiter string to use between the key and value in each pair in the transformed log event.
If you omit this, the equal = character is used.
keyPrefix (string) --
If you want to add a prefix to all transformed keys, specify it here.
nonMatchValue (string) --
A value to insert into the value field in the result, when a key-value pair is not successfully split.
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is false.
parseRoute53 (dict) --
Use this parameter to include the parseRoute53 processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parsePostgres (dict) --
Use this parameter to include the parsePostgres processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parseVPC (dict) --
Use this parameter to include the parseVPC processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parseWAF (dict) --
Use this parameter to include the parseWAF processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
renameKeys (dict) --
Use this parameter to include the renameKeys processor in your transformer.
entries (list) -- [REQUIRED]
An array of RenameKeyEntry objects, where each object contains the information about a single key to rename.
(dict) --
This object defines one key that will be renamed with the renameKey processor.
key (string) -- [REQUIRED]
The key to rename
renameTo (string) -- [REQUIRED]
The string to use for the new key name
overwriteIfExists (boolean) --
Specifies whether to overwrite the existing value if the destination key already exists. The default is false
splitString (dict) --
Use this parameter to include the splitString processor in your transformer.
entries (list) -- [REQUIRED]
An array of SplitStringEntry objects, where each object contains the information about one field to split.
(dict) --
This object defines one log field that will be split with the splitString processor.
source (string) -- [REQUIRED]
The key of the field to split.
delimiter (string) -- [REQUIRED]
The separator characters to split the string entry on.
substituteString (dict) --
Use this parameter to include the substituteString processor in your transformer.
entries (list) -- [REQUIRED]
An array of objects, where each object contains the information about one key to match and replace.
(dict) --
This object defines one log field key that will be replaced using the substituteString processor.
source (string) -- [REQUIRED]
The key to modify
from (string) -- [REQUIRED]
The regular expression string to be replaced. Special regex characters such as [ and ] must be escaped using \\ when using double quotes and with \ when using single quotes. For more information, see Class Pattern on the Oracle web site.
to (string) -- [REQUIRED]
The string to be substituted for each match of from
trimString (dict) --
Use this parameter to include the trimString processor in your transformer.
withKeys (list) -- [REQUIRED]
The array containing the keys of the fields to trim.
(string) --
typeConverter (dict) --
Use this parameter to include the typeConverter processor in your transformer.
entries (list) -- [REQUIRED]
An array of TypeConverterEntry objects, where each object contains the information about one field to change the type of.
(dict) --
This object defines one value type that will be converted using the typeConverter processor.
key (string) -- [REQUIRED]
The key with the value that is to be converted to a different type.
type (string) -- [REQUIRED]
The type to convert the field value to. Valid values are integer, double, string and boolean.
upperCaseString (dict) --
Use this parameter to include the upperCaseString processor in your transformer.
withKeys (list) -- [REQUIRED]
The array containing the keys of the fields to convert to uppercase.
(string) --
None
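For example, a minimal sketch that attaches a transformer that parses JSON messages and then renames one key. The log group name and the field names are hypothetical, and the empty parseJSON block relies on the documented defaults:

import boto3

logs = boto3.client('logs')

logs.put_transformer(
    logGroupIdentifier='my-log-group',
    transformerConfig=[
        # Processors run in order, like a pipeline.
        {'parseJSON': {}},
        {'renameKeys': {'entries': [
            {'key': 'msg', 'renameTo': 'message', 'overwriteIfExists': True},
        ]}},
    ],
)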
Creates or updates a field index policy for the specified log group. Only log groups in the Standard log class support field index policies. For more information about log classes, see Log classes.
You can use field index policies to create field indexes on fields found in log events in the log group. Creating field indexes speeds up and lowers the costs for CloudWatch Logs Insights queries that reference those field indexes, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, user ID, and instance IDs. For more information, see Create field indexes to improve query performance and reduce costs.
To find the fields that are in your log group events, use the GetLogGroupFields operation.
For example, suppose you have created a field index for requestId. Then, any CloudWatch Logs Insights query on that log group that includes requestId = value or requestId IN [value, value, ...] will process fewer log events to reduce costs, and have improved performance.
Each index policy has the following quotas and restrictions:
As many as 20 fields can be included in the policy.
Each field name can include as many as 100 characters.
Matches of log events to the names of indexed fields are case-sensitive. For example, a field index of RequestId won't match a log event containing requestId.
Log group-level field index policies created with PutIndexPolicy override account-level field index policies created with PutAccountPolicy. If you use PutIndexPolicy to create a field index policy for a log group, that log group uses only that policy. The log group ignores any account-wide field index policy that you might have created.
See also: AWS API Documentation
Request Syntax
client.put_index_policy( logGroupIdentifier='string', policyDocument='string' )
string
[REQUIRED]
Specify either the log group name or log group ARN to apply this field index policy to. If you specify an ARN, use the format arn:aws:logs:region:account-id:log-group:log_group_name. Don't include an * at the end.
string
[REQUIRED]
The index policy document, in JSON format. The following is an example of an index policy document that creates two indexes, RequestId and TransactionId.
"policyDocument": "{ "Fields": [ "RequestId", "TransactionId" ] }"
The policy document must include at least one field index. For more information about the fields that can be included and other restrictions, see Field index syntax and quotas.
dict
Response Syntax
{ 'indexPolicy': { 'logGroupIdentifier': 'string', 'lastUpdateTime': 123, 'policyDocument': 'string', 'policyName': 'string', 'source': 'ACCOUNT'|'LOG_GROUP' } }
Response Structure
(dict) --
indexPolicy (dict) --
The index policy that you just created or updated.
logGroupIdentifier (string) --
The ARN of the log group that this index policy applies to.
lastUpdateTime (integer) --
The date and time that this index policy was most recently updated.
policyDocument (string) --
The policy document for this index policy, in JSON format.
policyName (string) --
The name of this policy. Responses about log group-level field index policies don't have this field, because those policies don't have names.
source (string) --
This field indicates whether this is an account-level index policy or an index policy that applies only to a single log group.
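For example, a minimal sketch that indexes the two fields from the policy document example above (the log group name is a placeholder):

import json

import boto3

logs = boto3.client('logs')

policy = {'Fields': ['RequestId', 'TransactionId']}

response = logs.put_index_policy(
    logGroupIdentifier='my-log-group',
    policyDocument=json.dumps(policy),
)
print(response['indexPolicy']['source'])  # 'LOG_GROUP'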
Returns a list of field indexes listed in the field index policies of one or more log groups. For more information about field index policies, see PutIndexPolicy.
See also: AWS API Documentation
Request Syntax
client.describe_field_indexes( logGroupIdentifiers=[ 'string', ], nextToken='string' )
list
[REQUIRED]
An array containing the names or ARNs of the log groups that you want to retrieve field indexes for.
(string) --
string
The token for the next set of items to return. The token expires after 24 hours.
dict
Response Syntax
{ 'fieldIndexes': [ { 'logGroupIdentifier': 'string', 'fieldIndexName': 'string', 'lastScanTime': 123, 'firstEventTime': 123, 'lastEventTime': 123 }, ], 'nextToken': 'string' }
Response Structure
(dict) --
fieldIndexes (list) --
An array containing the field index information.
(dict) --
This structure describes one log event field that is used as an index in at least one index policy in this account.
logGroupIdentifier (string) --
If this field index appears in an index policy that applies only to a single log group, the ARN of that log group is displayed here.
fieldIndexName (string) --
The string that this field index matches.
lastScanTime (integer) --
The most recent time that CloudWatch Logs scanned ingested log events to search for this field index to improve the speed of future CloudWatch Logs Insights queries that search for this field index.
firstEventTime (integer) --
The time and date of the earliest log event that matches this field index, after the index policy that contains it was created.
lastEventTime (integer) --
The time and date of the most recent log event that matches this field index.
nextToken (string) --
The token for the next set of items to return. The token expires after 24 hours.
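For example, a minimal sketch that pages through the field indexes for one log group (placeholder name; this assumes nextToken is absent from the response when there are no more items):

import boto3

logs = boto3.client('logs')

kwargs = {'logGroupIdentifiers': ['my-log-group']}
while True:
    response = logs.describe_field_indexes(**kwargs)
    for index in response['fieldIndexes']:
        print(index['fieldIndexName'], index.get('lastEventTime'))
    if 'nextToken' not in response:
        break
    kwargs['nextToken'] = response['nextToken']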
Returns the information about the log transformer associated with this log group.
This operation returns data only for transformers created at the log group level. To get information for an account-level transformer, use DescribeAccountPolicies.
See also: AWS API Documentation
Request Syntax
client.get_transformer( logGroupIdentifier='string' )
string
[REQUIRED]
Specify either the name or ARN of the log group to return transformer information for. If the log group is in a source account and you are using a monitoring account, you must use the log group ARN.
dict
Response Syntax
{ 'logGroupIdentifier': 'string', 'creationTime': 123, 'lastModifiedTime': 123, 'transformerConfig': [ { 'addKeys': { 'entries': [ { 'key': 'string', 'value': 'string', 'overwriteIfExists': True|False }, ] }, 'copyValue': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'csv': { 'quoteCharacter': 'string', 'delimiter': 'string', 'columns': [ 'string', ], 'source': 'string' }, 'dateTimeConverter': { 'source': 'string', 'target': 'string', 'targetFormat': 'string', 'matchPatterns': [ 'string', ], 'sourceTimezone': 'string', 'targetTimezone': 'string', 'locale': 'string' }, 'deleteKeys': { 'withKeys': [ 'string', ] }, 'grok': { 'source': 'string', 'match': 'string' }, 'listToMap': { 'source': 'string', 'key': 'string', 'valueKey': 'string', 'target': 'string', 'flatten': True|False, 'flattenedElement': 'first'|'last' }, 'lowerCaseString': { 'withKeys': [ 'string', ] }, 'moveKeys': { 'entries': [ { 'source': 'string', 'target': 'string', 'overwriteIfExists': True|False }, ] }, 'parseCloudfront': { 'source': 'string' }, 'parseJSON': { 'source': 'string', 'destination': 'string' }, 'parseKeyValue': { 'source': 'string', 'destination': 'string', 'fieldDelimiter': 'string', 'keyValueDelimiter': 'string', 'keyPrefix': 'string', 'nonMatchValue': 'string', 'overwriteIfExists': True|False }, 'parseRoute53': { 'source': 'string' }, 'parsePostgres': { 'source': 'string' }, 'parseVPC': { 'source': 'string' }, 'parseWAF': { 'source': 'string' }, 'renameKeys': { 'entries': [ { 'key': 'string', 'renameTo': 'string', 'overwriteIfExists': True|False }, ] }, 'splitString': { 'entries': [ { 'source': 'string', 'delimiter': 'string' }, ] }, 'substituteString': { 'entries': [ { 'source': 'string', 'from': 'string', 'to': 'string' }, ] }, 'trimString': { 'withKeys': [ 'string', ] }, 'typeConverter': { 'entries': [ { 'key': 'string', 'type': 'boolean'|'integer'|'double'|'string' }, ] }, 'upperCaseString': { 'withKeys': [ 'string', ] } }, ] }
Response Structure
(dict) --
logGroupIdentifier (string) --
The ARN of the log group that you specified in your request.
creationTime (integer) --
The creation time of the transformer, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.
lastModifiedTime (integer) --
The date and time when this transformer was most recently modified, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.
transformerConfig (list) --
This structure contains the configuration of the requested transformer.
(dict) --
This structure contains the information about one processor in a log transformer.
addKeys (dict) --
Use this parameter to include the addKeys processor in your transformer.
entries (list) --
An array of objects, where each object contains the information about one key to add to the log event.
(dict) --
This object defines one key that will be added with the addKeys processor.
key (string) --
The key of the new entry to be added to the log event
value (string) --
The value of the new entry to be added to the log event
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the key already exists in the log event. If you omit this, the default is false.
copyValue (dict) --
Use this parameter to include the copyValue processor in your transformer.
entries (list) --
An array of CopyValueEntry objects, where each object contains the information about one field value to copy.
(dict) --
This object defines one value to be copied with the copyValue processor.
source (string) --
The key to copy.
target (string) --
The key of the field to copy the value to.
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is false.
csv (dict) --
Use this parameter to include the CSV processor in your transformer.
quoteCharacter (string) --
The character used as a text qualifier for a single column of data. If you omit this, the double quotation mark " character is used.
delimiter (string) --
The character used to separate each column in the original comma-separated value log event. If you omit this, the processor looks for the comma , character as the delimiter.
columns (list) --
An array of names to use for the columns in the transformed log event.
If you omit this, default column names ( [column_1, column_2 ...]) are used.
(string) --
source (string) --
The path to the field in the log event that has the comma separated values to be parsed. If you omit this value, the whole log message is processed.
dateTimeConverter (dict) --
Use this parameter to include the datetimeConverter processor in your transformer.
source (string) --
The key to apply the date conversion to.
target (string) --
The JSON field to store the result in.
targetFormat (string) --
The datetime format to use for the converted data in the target field.
If you omit this, the default of yyyy-MM-dd'T'HH:mm:ss.SSS'Z' is used.
matchPatterns (list) --
A list of patterns to match against the source field.
(string) --
sourceTimezone (string) --
The time zone of the source field. If you omit this, the default used is the UTC zone.
targetTimezone (string) --
The time zone of the target field. If you omit this, the default used is the UTC zone.
locale (string) --
The locale of the source field. If you omit this, the default of locale.ROOT is used.
deleteKeys (dict) --
Use this parameter to include the deleteKeys processor in your transformer.
withKeys (list) --
The list of keys to delete.
(string) --
grok (dict) --
Use this parameter to include the grok processor in your transformer.
source (string) --
The path to the field in the log event that you want to parse. If you omit this value, the whole log message is parsed.
match (string) --
The grok pattern to match against the log event. For a list of supported grok patterns, see Supported grok patterns.
listToMap (dict) --
Use this parameter to include the listToMap processor in your transformer.
source (string) --
The key in the log event that has a list of objects that will be converted to a map.
key (string) --
The key of the field to be extracted as keys in the generated map
valueKey (string) --
If this is specified, the values that you specify in this parameter will be extracted from the source objects and put into the values of the generated map. Otherwise, original objects in the source list will be put into the values of the generated map.
target (string) --
The key of the field that will hold the generated map
flatten (boolean) --
A Boolean value to indicate whether the list will be flattened into single items. Specify true to flatten the list. The default is false
flattenedElement (string) --
If you set flatten to true, use flattenedElement to specify which element, first or last, to keep.
You must specify this parameter if flatten is true
lowerCaseString (dict) --
Use this parameter to include the lowerCaseString processor in your transformer.
withKeys (list) --
The array containing the keys of the fields to convert to lowercase.
(string) --
moveKeys (dict) --
Use this parameter to include the moveKeys processor in your transformer.
entries (list) --
An array of objects, where each object contains the information about one key to move.
(dict) --
This object defines one key that will be moved with the moveKeys processor.
source (string) --
The key to move.
target (string) --
The key to move to.
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is false.
parseCloudfront (dict) --
Use this parameter to include the parseCloudfront processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parseJSON (dict) --
Use this parameter to include the parseJSON processor in your transformer.
source (string) --
Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, store.book
destination (string) --
The location to put the parsed key value pair into. If you omit this parameter, it is placed under the root node.
parseKeyValue (dict) --
Use this parameter to include the parseKeyValue processor in your transformer.
source (string) --
Path to the field in the log event that will be parsed. Use dot notation to access child fields. For example, store.book
destination (string) --
The destination field to put the extracted key-value pairs into
fieldDelimiter (string) --
The field delimiter string that is used between key-value pairs in the original log events. If you omit this, the ampersand & character is used.
keyValueDelimiter (string) --
The delimiter string to use between the key and value in each pair in the transformed log event.
If you omit this, the equal = character is used.
keyPrefix (string) --
If you want to add a prefix to all transformed keys, specify it here.
nonMatchValue (string) --
A value to insert into the value field in the result, when a key-value pair is not successfully split.
overwriteIfExists (boolean) --
Specifies whether to overwrite the value if the destination key already exists. If you omit this, the default is false.
parseRoute53 (dict) --
Use this parameter to include the parseRoute53 processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parsePostgres (dict) --
Use this parameter to include the parsePostgres processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parseVPC (dict) --
Use this parameter to include the parseVPC processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
parseWAF (dict) --
Use this parameter to include the parseWAF processor in your transformer.
If you use this processor, it must be the first processor in your transformer.
source (string) --
Omit this parameter and the whole log message will be processed by this processor. No other value than @message is allowed for source.
renameKeys (dict) --
Use this parameter to include the renameKeys processor in your transformer.
entries (list) --
An array of RenameKeyEntry objects, where each object contains the information about a single key to rename.
(dict) --
This object defines one key that will be renamed with the renameKey processor.
key (string) --
The key to rename
renameTo (string) --
The string to use for the new key name
overwriteIfExists (boolean) --
Specifies whether to overwrite the existing value if the destination key already exists. The default is false
splitString (dict) --
Use this parameter to include the splitString processor in your transformer.
entries (list) --
An array of SplitStringEntry objects, where each object contains the information about one field to split.
(dict) --
This object defines one log field that will be split with the splitString processor.
source (string) --
The key of the field to split.
delimiter (string) --
The separator characters to split the string entry on.
substituteString (dict) --
Use this parameter to include the substituteString processor in your transformer.
entries (list) --
An array of objects, where each object contains the information about one key to match and replace.
(dict) --
This object defines one log field key that will be replaced using the substituteString processor.
source (string) --
The key to modify
from (string) --
The regular expression string to be replaced. Special regex characters such as [ and ] must be escaped using \\ when using double quotes and with \ when using single quotes. For more information, see Class Pattern on the Oracle web site.
to (string) --
The string to be substituted for each match of from
trimString (dict) --
Use this parameter to include the trimString processor in your transformer.
withKeys (list) --
The array containing the keys of the fields to trim.
(string) --
typeConverter (dict) --
Use this parameter to include the typeConverter processor in your transformer.
entries (list) --
An array of TypeConverterEntry objects, where each object contains the information about one field to change the type of.
(dict) --
This object defines one value type that will be converted using the typeConverter processor.
key (string) --
The key with the value that is to be converted to a different type.
type (string) --
The type to convert the field value to. Valid values are integer, double, string and boolean.
upperCaseString (dict) --
Use this parameter to include the upperCaseString processor in your transformer.
withKeys (list) --
The array containing the keys of the fields to convert to uppercase.
(string) --
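For example, a minimal sketch that lists which processors a log group's transformer uses (placeholder name; each transformerConfig element carries exactly one processor key):

import boto3

logs = boto3.client('logs')

response = logs.get_transformer(logGroupIdentifier='my-log-group')

for processor in response['transformerConfig']:
    # e.g. 'parseJSON', 'renameKeys', ...
    print(next(iter(processor)))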
Returns a list of the log groups that were analyzed during a single CloudWatch Logs Insights query. This can be useful for queries that use log group name prefixes or the filterIndex command, because the log groups are dynamically selected in these cases.
For more information about field indexes, see Create field indexes to improve query performance and reduce costs.
See also: AWS API Documentation
Request Syntax
client.list_log_groups_for_query( queryId='string', nextToken='string', maxResults=123 )
string
[REQUIRED]
The ID of the query to use. This query ID is from the response to your StartQuery operation.
string
The token for the next set of items to return. The token expires after 24 hours.
integer
Limits the number of returned log groups to the specified number.
dict
Response Syntax
{ 'logGroupIdentifiers': [ 'string', ], 'nextToken': 'string' }
Response Structure
(dict) --
logGroupIdentifiers (list) --
An array of the names and ARNs of the log groups that were processed in the query.
(string) --
nextToken (string) --
The token for the next set of items to return. The token expires after 24 hours.
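For example, a minimal sketch (the queryId value is a placeholder taken from a prior StartQuery response):

import boto3

logs = boto3.client('logs')

response = logs.list_log_groups_for_query(
    queryId='12ab3456-12ab-123a-789e-1234567890ab',
    maxResults=50,
)
print(response['logGroupIdentifiers'])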
{'policyType': {'TRANSFORMER_POLICY', 'FIELD_INDEX_POLICY'}}
Deletes a CloudWatch Logs account policy. This stops the account-wide policy from applying to log groups in the account. If you delete a data protection policy or subscription filter policy, any log-group level policies of those types remain in effect.
To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are deleting.
To delete a data protection policy, you must have the logs:DeleteDataProtectionPolicy and logs:DeleteAccountPolicy permissions.
To delete a subscription filter policy, you must have the logs:DeleteSubscriptionFilter and logs:DeleteAccountPolicy permissions.
To delete a transformer policy, you must have the logs:DeleteTransformer and logs:DeleteAccountPolicy permissions.
To delete a field index policy, you must have the logs:DeleteIndexPolicy and logs:DeleteAccountPolicy permissions.
If you delete a field index policy, the indexing of the log events that happened before you deleted the policy will still be used for up to 30 days to improve CloudWatch Logs Insights queries.
See also: AWS API Documentation
Request Syntax
client.delete_account_policy( policyName='string', policyType='DATA_PROTECTION_POLICY'|'SUBSCRIPTION_FILTER_POLICY'|'FIELD_INDEX_POLICY'|'TRANSFORMER_POLICY' )
string
[REQUIRED]
The name of the policy to delete.
string
[REQUIRED]
The type of policy to delete.
None
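For example, a minimal sketch that deletes an account-level field index policy. The policy name is hypothetical, and the caller needs the logs:DeleteIndexPolicy and logs:DeleteAccountPolicy permissions noted above:

import boto3

logs = boto3.client('logs')

logs.delete_account_policy(
    policyName='my-index-policy',
    policyType='FIELD_INDEX_POLICY',
)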
{'nextToken': 'string', 'policyType': {'TRANSFORMER_POLICY', 'FIELD_INDEX_POLICY'}}
Response
{'accountPolicies': {'policyType': {'FIELD_INDEX_POLICY', 'TRANSFORMER_POLICY'}}, 'nextToken': 'string'}
Returns a list of all CloudWatch Logs account policies in the account.
See also: AWS API Documentation
Request Syntax
client.describe_account_policies( policyType='DATA_PROTECTION_POLICY'|'SUBSCRIPTION_FILTER_POLICY'|'FIELD_INDEX_POLICY'|'TRANSFORMER_POLICY', policyName='string', accountIdentifiers=[ 'string', ], nextToken='string' )
string
[REQUIRED]
Use this parameter to limit the returned policies to only the policies that match the policy type that you specify.
string
Use this parameter to limit the returned policies to only the policy with the name that you specify.
list
If you are using an account that is set up as a monitoring account for CloudWatch unified cross-account observability, you can use this to specify the account ID of a source account. If you do, the operation returns the account policy for the specified account. Currently, you can specify only one account ID in this parameter.
If you omit this parameter, only the policy in the current account is returned.
(string) --
string
The token for the next set of items to return. (You received this token from a previous call.)
dict
Response Syntax
{ 'accountPolicies': [ { 'policyName': 'string', 'policyDocument': 'string', 'lastUpdatedTime': 123, 'policyType': 'DATA_PROTECTION_POLICY'|'SUBSCRIPTION_FILTER_POLICY'|'FIELD_INDEX_POLICY'|'TRANSFORMER_POLICY', 'scope': 'ALL', 'selectionCriteria': 'string', 'accountId': 'string' }, ], 'nextToken': 'string' }
Response Structure
(dict) --
accountPolicies (list) --
An array of structures that contain information about the CloudWatch Logs account policies that match the specified filters.
(dict) --
A structure that contains information about one CloudWatch Logs account policy.
policyName (string) --
The name of the account policy.
policyDocument (string) --
The policy document for this account policy.
The JSON specified in policyDocument can be up to 30,720 characters.
lastUpdatedTime (integer) --
The date and time that this policy was most recently updated.
policyType (string) --
The type of policy for this account policy.
scope (string) --
The scope of the account policy.
selectionCriteria (string) --
The log group selection criteria that is used for this policy.
accountId (string) --
The Amazon Web Services account ID that the policy applies to.
nextToken (string) --
The token to use when requesting the next set of items. The token expires after 24 hours.
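For example, a minimal sketch that lists every account-level field index policy in the current account, following nextToken across pages:
import boto3

client = boto3.client('logs')

# Page through all field index policies in the current account.
policies = []
kwargs = {'policyType': 'FIELD_INDEX_POLICY'}
while True:
    resp = client.describe_account_policies(**kwargs)
    policies.extend(resp['accountPolicies'])
    token = resp.get('nextToken')
    if not token:
        break
    kwargs['nextToken'] = token

for policy in policies:
    print(policy['policyName'], policy['scope'], policy.get('selectionCriteria'))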
{'metricFilters': {'applyOnTransformedLogs': 'boolean'}}
Lists the specified metric filters. You can list all of the metric filters or filter the results by log name, prefix, metric name, or metric namespace. The results are ASCII-sorted by filter name.
See also: AWS API Documentation
Request Syntax
client.describe_metric_filters( logGroupName='string', filterNamePrefix='string', nextToken='string', limit=123, metricName='string', metricNamespace='string' )
string
The name of the log group.
string
The prefix to match. CloudWatch Logs uses the value that you set here only if you also include the logGroupName parameter in your request.
string
The token for the next set of items to return. (You received this token from a previous call.)
integer
The maximum number of items returned. If you don't specify a value, the default is up to 50 items.
string
Filters results to include only those with the specified metric name. If you include this parameter in your request, you must also include the metricNamespace parameter.
string
Filters results to include only those in the specified namespace. If you include this parameter in your request, you must also include the metricName parameter.
dict
Response Syntax
{ 'metricFilters': [ { 'filterName': 'string', 'filterPattern': 'string', 'metricTransformations': [ { 'metricName': 'string', 'metricNamespace': 'string', 'metricValue': 'string', 'defaultValue': 123.0, 'dimensions': { 'string': 'string' }, 'unit': 'Seconds'|'Microseconds'|'Milliseconds'|'Bytes'|'Kilobytes'|'Megabytes'|'Gigabytes'|'Terabytes'|'Bits'|'Kilobits'|'Megabits'|'Gigabits'|'Terabits'|'Percent'|'Count'|'Bytes/Second'|'Kilobytes/Second'|'Megabytes/Second'|'Gigabytes/Second'|'Terabytes/Second'|'Bits/Second'|'Kilobits/Second'|'Megabits/Second'|'Gigabits/Second'|'Terabits/Second'|'Count/Second'|'None' }, ], 'creationTime': 123, 'logGroupName': 'string', 'applyOnTransformedLogs': True|False }, ], 'nextToken': 'string' }
Response Structure
(dict) --
metricFilters (list) --
The metric filters.
(dict) --
Metric filters express how CloudWatch Logs would extract metric observations from ingested log events and transform them into metric data in a CloudWatch metric.
filterName (string) --
The name of the metric filter.
filterPattern (string) --
A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message.
metricTransformations (list) --
The metric transformations.
(dict) --
Indicates how to transform ingested log events to metric data in a CloudWatch metric.
metricName (string) --
The name of the CloudWatch metric.
metricNamespace (string) --
A custom namespace to contain your metric in CloudWatch. Use namespaces to group together metrics that are similar. For more information, see Namespaces.
metricValue (string) --
The value to publish to the CloudWatch metric when a filter pattern matches a log event.
defaultValue (float) --
(Optional) The value to emit when a filter pattern does not match a log event. This value can be null.
dimensions (dict) --
The fields to use as dimensions for the metric. One metric filter can include as many as three dimensions.
(string) --
(string) --
unit (string) --
The unit to assign to the metric. If you omit this, the unit is set as None.
creationTime (integer) --
The creation time of the metric filter, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.
logGroupName (string) --
The name of the log group.
applyOnTransformedLogs (boolean) --
This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer.
If this value is true, the metric filter is applied on the transformed version of the log events instead of the original ingested log events.
nextToken (string) --
The token for the next set of items to return. The token expires after 24 hours.
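For example, a minimal sketch that lists the metric filters for one log group and reports which of them run on transformed log events (the log group name is a hypothetical placeholder):
import boto3

client = boto3.client('logs')

# '/my/log-group' is hypothetical. applyOnTransformedLogs is True for
# filters that match against the transformed version of the log events.
resp = client.describe_metric_filters(logGroupName='/my/log-group')
for f in resp['metricFilters']:
    print(f['filterName'], f.get('applyOnTransformedLogs', False))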
{'subscriptionFilters': {'applyOnTransformedLogs': 'boolean'}}
Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name.
See also: AWS API Documentation
Request Syntax
client.describe_subscription_filters( logGroupName='string', filterNamePrefix='string', nextToken='string', limit=123 )
string
[REQUIRED]
The name of the log group.
string
The prefix to match. If you don't specify a value, no prefix filter is applied.
string
The token for the next set of items to return. (You received this token from a previous call.)
integer
The maximum number of items returned. If you don't specify a value, the default is up to 50 items.
dict
Response Syntax
{ 'subscriptionFilters': [ { 'filterName': 'string', 'logGroupName': 'string', 'filterPattern': 'string', 'destinationArn': 'string', 'roleArn': 'string', 'distribution': 'Random'|'ByLogStream', 'applyOnTransformedLogs': True|False, 'creationTime': 123 }, ], 'nextToken': 'string' }
Response Structure
(dict) --
subscriptionFilters (list) --
The subscription filters.
(dict) --
Represents a subscription filter.
filterName (string) --
The name of the subscription filter.
logGroupName (string) --
The name of the log group.
filterPattern (string) --
A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message.
destinationArn (string) --
The Amazon Resource Name (ARN) of the destination.
roleArn (string) --
The ARN of the IAM role that grants CloudWatch Logs permissions to deliver ingested log events to the destination stream.
distribution (string) --
The method used to distribute log data to the destination, which can be either random or grouped by log stream.
applyOnTransformedLogs (boolean) --
This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer.
If this value is true, the subscription filter is applied on the transformed version of the log events instead of the original ingested log events.
creationTime (integer) --
The creation time of the subscription filter, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.
nextToken (string) --
The token for the next set of items to return. The token expires after 24 hours.
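For example, a minimal sketch that lists the subscription filters for one log group, including their destinations and whether they run on transformed log events (the log group name is a hypothetical placeholder):
import boto3

client = boto3.client('logs')

# '/my/log-group' is hypothetical.
resp = client.describe_subscription_filters(logGroupName='/my/log-group')
for f in resp['subscriptionFilters']:
    print(f['filterName'], f['destinationArn'], f.get('applyOnTransformedLogs', False))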
{'statistics': {'estimatedBytesSkipped': 'double', 'estimatedRecordsSkipped': 'double', 'logGroupsScanned': 'double'}}
Returns the results from the specified query.
Only the fields requested in the query are returned, along with a @ptr field, which is the identifier for the log record. You can use the value of @ptr in a GetLogRecord operation to get the full log record.
GetQueryResults does not start running a query. To run a query, use StartQuery. For more information about how long results of previous queries are available, see CloudWatch Logs quotas.
If the value of the Status field in the output is Running, this operation returns only partial results. If you see a value of Scheduled or Running for the status, you can retry the operation later to see the final results.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start queries in linked source accounts. For more information, see CloudWatch cross-account observability.
See also: AWS API Documentation
Request Syntax
client.get_query_results( queryId='string' )
string
[REQUIRED]
The ID number of the query.
dict
Response Syntax
{ 'results': [ [ { 'field': 'string', 'value': 'string' }, ], ], 'statistics': { 'recordsMatched': 123.0, 'recordsScanned': 123.0, 'estimatedRecordsSkipped': 123.0, 'bytesScanned': 123.0, 'estimatedBytesSkipped': 123.0, 'logGroupsScanned': 123.0 }, 'status': 'Scheduled'|'Running'|'Complete'|'Failed'|'Cancelled'|'Timeout'|'Unknown', 'encryptionKey': 'string' }
Response Structure
(dict) --
results (list) --
The log events that matched the query criteria during the most recent run of the query.
The results value is an array of arrays. Each log event is one object in the top-level array. Each of these log event objects is an array of field/value pairs.
(list) --
(dict) --
Contains one field from one log event returned by a CloudWatch Logs Insights query, along with the value of that field.
For more information about the fields that are generated by CloudWatch Logs, see Supported Logs and Discovered Fields.
field (string) --
The log event field.
value (string) --
The value of this field.
statistics (dict) --
Includes the number of log events scanned by the query, the number of log events that matched the query criteria, and the total number of bytes in the scanned log events. These values reflect the full raw results of the query.
recordsMatched (float) --
The number of log events that matched the query string.
recordsScanned (float) --
The total number of log events scanned during the query.
estimatedRecordsSkipped (float) --
An estimate of the number of log events that were skipped when processing this query because the query contained an indexed field. Skipping these entries lowers query costs and improves query performance. For more information about field indexes, see PutIndexPolicy.
bytesScanned (float) --
The total number of bytes in the log events scanned during the query.
estimatedBytesSkipped (float) --
An estimate of the number of bytes in the log events that were skipped when processing this query because the query contained an indexed field. Skipping these entries lowers query costs and improves query performance. For more information about field indexes, see PutIndexPolicy.
logGroupsScanned (float) --
The number of log groups that were scanned by this query.
status (string) --
The status of the most recent run of the query. Possible values are Cancelled, Complete, Failed, Running, Scheduled, Timeout, and Unknown.
Queries time out after 60 minutes of runtime. To avoid having your queries time out, reduce the time range being searched or partition your query into a number of queries.
encryptionKey (string) --
If you associated a KMS key with the CloudWatch Logs Insights query results in this account, this field displays the ARN of the key that's used to encrypt the query results when StartQuery stores them.
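A common pattern is to start a query and then poll this operation until the status leaves the Scheduled and Running states. A minimal sketch (the log group name and query string are hypothetical placeholders):
import time

import boto3

client = boto3.client('logs')

# Query the last hour of a log group; the filter on requestId can take
# advantage of a field index on that field, if one exists.
start = client.start_query(
    logGroupName='/my/log-group',
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString='fields @timestamp, @message | filter requestId = "abc-123"',
)

while True:
    resp = client.get_query_results(queryId=start['queryId'])
    if resp['status'] not in ('Scheduled', 'Running'):
        break
    time.sleep(1)

# The statistics include the skip estimates produced by field indexes.
stats = resp['statistics']
print(resp['status'], stats.get('estimatedRecordsSkipped'), stats.get('logGroupsScanned'))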
{'policyType': {'TRANSFORMER_POLICY', 'FIELD_INDEX_POLICY'}}
Response
{'accountPolicy': {'policyType': {'TRANSFORMER_POLICY', 'FIELD_INDEX_POLICY'}}}
Creates an account-level data protection policy, subscription filter policy, or field index policy that applies to all log groups or a subset of log groups in the account.
Data protection policy
A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy.
If you use PutAccountPolicy to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account-level policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.
For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.
To use the PutAccountPolicy operation for a data protection policy, you must be signed on with the logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.
The PutAccountPolicy operation applies to all log groups in the account. You can use PutDataProtectionPolicy to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.
Subscription filter policy
A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.
The following destinations are supported for subscription filters:
A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery.
A Firehose data stream in the same account as the subscription policy, for same-account delivery.
A Lambda function in the same account as the subscription policy, for same-account delivery.
A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations.
Each account can have one account-level subscription filter policy per Region. If you are updating an existing filter, you must specify the correct name in PolicyName. To perform a PutAccountPolicy subscription filter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.
Transformer policy
Creates or updates a log transformer policy for your account. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.
You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region.
A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. For more information about the available processors to use in a transformer, see Processors that you can use.
Having log events in a standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies.
You can create transformers only for the log groups in the Standard log class.
You can have one account-level transformer policy that applies to all log groups in the account. Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with the selectionCriteria parameter. If you have multiple account-level transformer policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with my-log, you can't have another transformer policy filtered to my-logprod or my-logging.
You can also set up a transformer at the log-group level. For more information, see PutTransformer. If there is both a log-group level transformer created with PutTransformer and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer. It ignores the account-level transformer.
Field index policy
You can use field index policies to create indexes on fields found in log events in the log group. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexed fields include request IDs, session IDs, user IDs, and instance IDs. For more information, see Create field indexes to improve query performance and reduce costs.
To find the fields that are in your log group events, use the GetLogGroupFields operation.
For example, suppose you have created a field index for requestId. Then, any CloudWatch Logs Insights query on that log group that includes requestId = value or requestId in [value, value, ...] will attempt to process only the log events where the indexed field matches the specified value.
Matches of log events to the names of indexed fields are case-sensitive. For example, an indexed field of RequestId won't match a log event containing requestId.
You can have one account-level field index policy that applies to all log groups in the account. Or you can create as many as 20 account-level field index policies that are each scoped to a subset of log groups with the selectionCriteria parameter. If you have multiple account-level index policies with selection criteria, no two of them can use the same or overlapping log group name prefixes. For example, if you have one policy filtered to log groups that start with my-log, you can't have another field index policy filtered to my-logprod or my-logging.
If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only to the monitoring account and not to any source accounts.
If you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of PutAccountPolicy. If you do so, that log group will use only that log-group level policy, and will ignore the account-level policy that you create with PutAccountPolicy.
See also: AWS API Documentation
Request Syntax
client.put_account_policy( policyName='string', policyDocument='string', policyType='DATA_PROTECTION_POLICY'|'SUBSCRIPTION_FILTER_POLICY'|'FIELD_INDEX_POLICY'|'TRANSFORMER_POLICY', scope='ALL', selectionCriteria='string' )
string
[REQUIRED]
A name for the policy. This must be unique within the account.
string
[REQUIRED]
Specify the policy, in JSON.
Data protection policy
A data protection policy must include two JSON blocks:
The first block must include both a DataIdentifier array and an Operation property with an Audit action. The DataIdentifier array lists the types of sensitive data that you want to mask. For more information about the available options, see Types of data that you can mask. The Operation property with an Audit action is required to find the sensitive data terms. This Audit action must contain a FindingsDestination object. You can optionally use that FindingsDestination object to list one or more destinations to send audit findings to. If you specify destinations such as log groups, Firehose streams, and S3 buckets, they must already exist.
The second block must include both a DataIdentifier array and an Operation property with a Deidentify action. The DataIdentifier array must exactly match the DataIdentifier array in the first block of the policy. The Operation property with the Deidentify action is what actually masks the data, and it must contain the "MaskConfig": {} object. The "MaskConfig": {} object must be empty.
For an example data protection policy, see the Examples section on this page.
In addition to the two JSON blocks, the policyDocument can also include Name, Description, and Version fields. The Name is different from the operation's policyName parameter, and is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.
The JSON specified in policyDocument can be up to 30,720 characters long.
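To make the two-block structure concrete, the following is a minimal sketch of a data protection policy that audits and masks email addresses; the policy names, the Version value, and the data identifier ARN are illustrative:
import json

import boto3

client = boto3.client('logs')

# The first block audits, the second masks; both use the same DataIdentifier.
policy = {
    "Name": "ACCOUNT_DATA_PROTECTION_POLICY",
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "audit-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {
            "Sid": "redact-policy",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}

client.put_account_policy(
    policyName='my-data-protection-policy',
    policyDocument=json.dumps(policy),
    policyType='DATA_PROTECTION_POLICY',
    scope='ALL',
)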
Subscription filter policy
A subscription filter policy can include the following attributes in a JSON block:
DestinationArn The ARN of the destination to deliver log events to. Supported destinations are:
A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery.
A Firehose data stream in the same account as the subscription policy, for same-account delivery.
A Lambda function in the same account as the subscription policy, for same-account delivery.
A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations.
RoleArn The ARN of an IAM role that grants CloudWatch Logs permissions to deliver ingested log events to the destination stream. You don't need to provide the ARN when you are working with a logical destination for cross-account delivery.
FilterPattern A filter pattern for subscribing to a filtered stream of log events.
Distribution The method used to distribute log data to the destination. By default, log data is grouped by log stream, but the grouping can be set to Random for a more even distribution. This property is only applicable when the destination is a Kinesis Data Streams data stream.
Transformer policy
A transformer policy must include one JSON block with the array of processors and their configurations. For more information about available processors, see Processors that you can use.
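As an illustration, the sketch below builds a transformer policy document carrying the processor array; the top-level transformerConfig key and the processor choices (parseJSON followed by addKeys) are assumptions for this sketch:
import json

import boto3

client = boto3.client('logs')

# Parse each ingested event as JSON, then stamp a metadata key onto it.
# The "transformerConfig" wrapper key is an assumption here.
policy = {
    "transformerConfig": [
        {"parseJSON": {}},
        {"addKeys": {"entries": [
            {"key": "metadata.transformed_in", "value": "CloudWatchLogs"},
        ]}},
    ],
}

client.put_account_policy(
    policyName='MyTransformerPolicy',
    policyDocument=json.dumps(policy),
    policyType='TRANSFORMER_POLICY',
    scope='ALL',
)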
Field index policy
A field index policy can include the following attribute in a JSON block:
Fields The array of field indexes to create.
It must contain at least one field index.
The following is an example of an index policy document that creates two indexes, RequestId and TransactionId.
"policyDocument": "{ \"Fields\": [ \"RequestId\", \"TransactionId\" ] }"
string
[REQUIRED]
The type of policy that you're creating or updating.
string
Currently the only valid value for this parameter is ALL, which specifies that the policy applies to all log groups in the account. If you omit this parameter, the default of ALL is used.
string
Use this parameter to apply the new policy to a subset of log groups in the account.
Specifying selectionCriteria is valid only when you specify SUBSCRIPTION_FILTER_POLICY, FIELD_INDEX_POLICY, or TRANSFORMER_POLICY for policyType.
If policyType is SUBSCRIPTION_FILTER_POLICY, the only supported selectionCriteria filter is LogGroupName NOT IN []
If policyType is FIELD_INDEX_POLICY or TRANSFORMER_POLICY, the only supported selectionCriteria filter is LogGroupNamePrefix
The selectionCriteria string can be up to 25KB in length. The length is determined by using its UTF-8 bytes.
Using the selectionCriteria parameter with SUBSCRIPTION_FILTER_POLICY is useful to help prevent infinite loops. For more information, see Log recursion prevention.
dict
Response Syntax
{ 'accountPolicy': { 'policyName': 'string', 'policyDocument': 'string', 'lastUpdatedTime': 123, 'policyType': 'DATA_PROTECTION_POLICY'|'SUBSCRIPTION_FILTER_POLICY'|'FIELD_INDEX_POLICY'|'TRANSFORMER_POLICY', 'scope': 'ALL', 'selectionCriteria': 'string', 'accountId': 'string' } }
Response Structure
(dict) --
accountPolicy (dict) --
The account policy that you created.
policyName (string) --
The name of the account policy.
policyDocument (string) --
The policy document for this account policy.
The JSON specified in policyDocument can be up to 30,720 characters.
lastUpdatedTime (integer) --
The date and time that this policy was most recently updated.
policyType (string) --
The type of policy for this account policy.
scope (string) --
The scope of the account policy.
selectionCriteria (string) --
The log group selection criteria that is used for this policy.
accountId (string) --
The Amazon Web Services account ID that the policy applies to.
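Putting the pieces together for a field index policy, a minimal sketch that creates the two-index policy document shown above for all log groups in the account (the policy name is hypothetical):
import boto3

client = boto3.client('logs')

# Index RequestId and TransactionId account-wide.
resp = client.put_account_policy(
    policyName='my-field-index-policy',
    policyDocument='{"Fields": ["RequestId", "TransactionId"]}',
    policyType='FIELD_INDEX_POLICY',
    scope='ALL',
)
print(resp['accountPolicy']['policyName'], resp['accountPolicy']['policyType'])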
{'applyOnTransformedLogs': 'boolean'}
Creates or updates a metric filter and associates it with the specified log group. With metric filters, you can configure rules to extract metric data from log events ingested through PutLogEvents.
The maximum number of metric filters that can be associated with a log group is 100.
Using regular expressions to create metric filters is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in metric filters, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.
When you create a metric filter, you can also optionally assign a unit and dimensions to the metric that is created.
See also: AWS API Documentation
Request Syntax
client.put_metric_filter( logGroupName='string', filterName='string', filterPattern='string', metricTransformations=[ { 'metricName': 'string', 'metricNamespace': 'string', 'metricValue': 'string', 'defaultValue': 123.0, 'dimensions': { 'string': 'string' }, 'unit': 'Seconds'|'Microseconds'|'Milliseconds'|'Bytes'|'Kilobytes'|'Megabytes'|'Gigabytes'|'Terabytes'|'Bits'|'Kilobits'|'Megabits'|'Gigabits'|'Terabits'|'Percent'|'Count'|'Bytes/Second'|'Kilobytes/Second'|'Megabytes/Second'|'Gigabytes/Second'|'Terabytes/Second'|'Bits/Second'|'Kilobits/Second'|'Megabits/Second'|'Gigabits/Second'|'Terabits/Second'|'Count/Second'|'None' }, ], applyOnTransformedLogs=True|False )
string
[REQUIRED]
The name of the log group.
string
[REQUIRED]
A name for the metric filter.
string
[REQUIRED]
A filter pattern for extracting metric data out of ingested log events.
list
[REQUIRED]
A collection of information that defines how metric data gets emitted.
(dict) --
Indicates how to transform ingested log events to metric data in a CloudWatch metric.
metricName (string) -- [REQUIRED]
The name of the CloudWatch metric.
metricNamespace (string) -- [REQUIRED]
A custom namespace to contain your metric in CloudWatch. Use namespaces to group together metrics that are similar. For more information, see Namespaces.
metricValue (string) -- [REQUIRED]
The value to publish to the CloudWatch metric when a filter pattern matches a log event.
defaultValue (float) --
(Optional) The value to emit when a filter pattern does not match a log event. This value can be null.
dimensions (dict) --
The fields to use as dimensions for the metric. One metric filter can include as many as three dimensions.
(string) --
(string) --
unit (string) --
The unit to assign to the metric. If you omit this, the unit is set as None.
boolean
This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer.
If the log group uses either a log-group level or account-level transformer, and you specify true, the metric filter will be applied on the transformed version of the log events instead of the original ingested log events.
None
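For example, a minimal sketch that counts ERROR events, emitting 0 when nothing matches (the log group, filter, and metric names and the pattern are hypothetical placeholders):
import boto3

client = boto3.client('logs')

# Publish a 1 to MyApp/ErrorCount for each matching event; defaultValue
# keeps the metric continuous when no event matches.
client.put_metric_filter(
    logGroupName='/my/log-group',
    filterName='error-count',
    filterPattern='ERROR',
    metricTransformations=[
        {
            'metricName': 'ErrorCount',
            'metricNamespace': 'MyApp',
            'metricValue': '1',
            'defaultValue': 0.0,
            'unit': 'Count',
        },
    ],
    # Match against transformed events if the log group has a transformer.
    applyOnTransformedLogs=True,
)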
{'applyOnTransformedLogs': 'boolean'}
Creates or updates a subscription filter and associates it with the specified log group. With subscription filters, you can subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.
The following destinations are supported for subscription filters:
An Amazon Kinesis data stream belonging to the same account as the subscription filter, for same-account delivery.
A logical destination created with PutDestination that belongs to a different account, for cross-account delivery. We currently support Kinesis Data Streams and Firehose as logical destinations.
An Amazon Kinesis Data Firehose delivery stream that belongs to the same account as the subscription filter, for same-account delivery.
A Lambda function that belongs to the same account as the subscription filter, for same-account delivery.
Each log group can have up to two subscription filters associated with it. If you are updating an existing filter, you must specify the correct name in filterName.
Using regular expressions to create subscription filters is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in subscription filters, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.
To perform a PutSubscriptionFilter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.
See also: AWS API Documentation
Request Syntax
client.put_subscription_filter( logGroupName='string', filterName='string', filterPattern='string', destinationArn='string', roleArn='string', distribution='Random'|'ByLogStream', applyOnTransformedLogs=True|False )
string
[REQUIRED]
The name of the log group.
string
[REQUIRED]
A name for the subscription filter. If you are updating an existing filter, you must specify the correct name in filterName. To find the name of the filter currently associated with a log group, use DescribeSubscriptionFilters.
string
[REQUIRED]
A filter pattern for subscribing to a filtered stream of log events.
string
[REQUIRED]
The ARN of the destination to deliver matching log events to. Currently, the supported destinations are:
An Amazon Kinesis stream belonging to the same account as the subscription filter, for same-account delivery.
A logical destination (specified using an ARN) belonging to a different account, for cross-account delivery. If you're setting up a cross-account subscription, the destination must have an IAM policy associated with it. The IAM policy must allow the sender to send logs to the destination. For more information, see PutDestinationPolicy.
A Kinesis Data Firehose delivery stream belonging to the same account as the subscription filter, for same-account delivery.
A Lambda function belonging to the same account as the subscription filter, for same-account delivery.
string
The ARN of an IAM role that grants CloudWatch Logs permissions to deliver ingested log events to the destination stream. You don't need to provide the ARN when you are working with a logical destination for cross-account delivery.
string
The method used to distribute log data to the destination. By default, log data is grouped by log stream, but the grouping can be set to Random for a more even distribution. This property is only applicable when the destination is an Amazon Kinesis data stream.
boolean
This parameter is valid only for log groups that have an active log transformer. For more information about log transformers, see PutTransformer.
If the log group uses either a log-group level or account-level transformer, and you specify true, the subscription filter will be applied on the transformed version of the log events instead of the original ingested log events.
None
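For example, a minimal sketch that subscribes a log group to a Kinesis data stream (all names, ARNs, and the account ID are hypothetical placeholders; the role must allow CloudWatch Logs to put records on the stream):
import boto3

client = boto3.client('logs')

# An empty filterPattern matches every log event in the log group.
client.put_subscription_filter(
    logGroupName='/my/log-group',
    filterName='to-kinesis',
    filterPattern='',
    destinationArn='arn:aws:kinesis:us-east-1:123456789012:stream/my-stream',
    roleArn='arn:aws:iam::123456789012:role/cwl-to-kinesis-role',
    distribution='ByLogStream',
)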