2021/08/04 - Amazon Transcribe Service - 9 new / 4 updated API methods
Changes: This release adds support for call analytics (batch) within Amazon Transcribe.
Retrieves information about a call analytics category.
See also: AWS API Documentation
Request Syntax
client.get_call_analytics_category(
    CategoryName='string'
)
string
[REQUIRED]
The name of the category you want information about. This value is case sensitive.
dict
Response Syntax
{ 'CategoryProperties': { 'CategoryName': 'string', 'Rules': [ { 'NonTalkTimeFilter': { 'Threshold': 123, 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'InterruptionFilter': { 'Threshold': 123, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'TranscriptFilter': { 'TranscriptFilterType': 'EXACT', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False, 'Targets': [ 'string', ] }, 'SentimentFilter': { 'Sentiments': [ 'POSITIVE'|'NEGATIVE'|'NEUTRAL'|'MIXED', ], 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False } }, ], 'CreateTime': datetime(2015, 1, 1), 'LastUpdateTime': datetime(2015, 1, 1) } }
Response Structure
(dict) --
CategoryProperties (dict) --
The rules you've defined for a category.
CategoryName (string) --
The name of the call analytics category.
Rules (list) --
The rules used to create a call analytics category.
(dict) --
A condition in the call between the customer and the agent that you want to filter for.
NonTalkTimeFilter (dict) --
A condition for a time period when neither the customer nor the agent was talking.
Threshold (integer) --
The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period when people were talking.
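The rule structures above can be assembled as plain Python dicts. The sketch below builds a NonTalkTimeFilter rule of the shape shown in the response syntax; the helper function and its argument names are illustrative, not part of the API, and the thresholds are arbitrary examples.

```python
def non_talk_rule(threshold_ms, start_ms=None, end_ms=None, negate=False):
    """Build a NonTalkTimeFilter rule flagging silences of at least threshold_ms."""
    flt = {"Threshold": threshold_ms, "Negate": negate}
    if start_ms is not None and end_ms is not None:
        # Absolute time ranges require both a start and an end, in milliseconds.
        flt["AbsoluteTimeRange"] = {"StartTime": start_ms, "EndTime": end_ms}
    return {"NonTalkTimeFilter": flt}

# Silence of 30 seconds or more, between the 10,000 ms and 50,000 ms marks:
rule = non_talk_rule(30000, start_ms=10000, end_ms=50000)
# → {'Threshold': 30000, 'Negate': False,
#    'AbsoluteTimeRange': {'StartTime': 10000, 'EndTime': 50000}}
```

A list of such rule dicts is what the Rules field of a category contains.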
InterruptionFilter (dict) --
A condition for a time period when either the customer or agent was interrupting the other person.
Threshold (integer) --
The duration of the interruption.
ParticipantRole (string) --
Indicates whether the agent or the customer was doing the interrupting.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period where there was no interruption.
TranscriptFilter (dict) --
A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase is matched.
TranscriptFilterType (string) --
Matches the phrase to the transcription output word for word. For example, if you specify the phrase "I want to speak to the manager", Amazon Transcribe attempts to match that specific phrase to the transcription.
AbsoluteTimeRange (dict) --
A time range, set in milliseconds, between two points in the call.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
Determines whether the customer or the agent is speaking the phrases that you've specified.
Negate (boolean) --
If TRUE, the rule that you specify is applied to everything except for the phrases that you specify.
Targets (list) --
The phrases that you're specifying for the transcript filter to match.
(string) --
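A TranscriptFilter rule can be built the same way as the other rule dicts. The helper below is an illustrative sketch mirroring the structure documented above; the phrase, role, and function name are examples, not API identifiers.

```python
def transcript_rule(phrases, role="CUSTOMER", negate=False):
    """Build a TranscriptFilter rule matching any of the given phrases exactly."""
    return {
        "TranscriptFilter": {
            "TranscriptFilterType": "EXACT",  # exact, word-for-word matching
            "ParticipantRole": role,          # 'AGENT' or 'CUSTOMER'
            "Negate": negate,                 # True inverts the match
            "Targets": list(phrases),
        }
    }

rule = transcript_rule(["I want to speak to the manager"])
```

Setting negate=True would instead match everything except the listed phrases, as described for the Negate field.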
SentimentFilter (dict) --
A condition that is applied to a particular customer sentiment.
Sentiments (list) --
An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
(string) --
AbsoluteTimeRange (dict) --
The time range, measured in milliseconds, of the sentiment.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
The time range, set in percentages, that corresponds to a proportion of the call.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
A value that determines whether the sentiment belongs to the customer or the agent.
Negate (boolean) --
Set to TRUE to look for sentiments that weren't specified in the request.
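A SentimentFilter rule combines the sentiment values with an optional relative time range. The sketch below follows the structure above; the helper name and the choice of "negative customer sentiment in the second half of the call" are illustrative assumptions.

```python
def sentiment_rule(sentiments, role="CUSTOMER", start_pct=None, end_pct=None):
    """Build a SentimentFilter rule for the given sentiment values."""
    flt = {
        "Sentiments": list(sentiments),  # POSITIVE, NEGATIVE, NEUTRAL, MIXED
        "ParticipantRole": role,
        "Negate": False,
    }
    if start_pct is not None and end_pct is not None:
        # Relative ranges need both a start and an end percentage.
        flt["RelativeTimeRange"] = {
            "StartPercentage": start_pct,
            "EndPercentage": end_pct,
        }
    return {"SentimentFilter": flt}

# Negative customer sentiment in the second half of the call:
rule = sentiment_rule(["NEGATIVE"], start_pct=50, end_pct=100)
```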
CreateTime (datetime) --
A timestamp that shows when the call analytics category was created.
LastUpdateTime (datetime) --
A timestamp that shows when the call analytics category was most recently updated.
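Putting the pieces together, a caller might fetch a category and inspect which filter each rule applies. In the sketch below, the client object would come from boto3.client("transcribe") and requires AWS credentials; it is passed in as a parameter so the helper itself is plain Python. The category name is a placeholder.

```python
def rule_types(client, category_name):
    """Return the filter type used by each rule in a call analytics category."""
    resp = client.get_call_analytics_category(CategoryName=category_name)
    props = resp["CategoryProperties"]
    # Each entry in Rules is a dict with exactly one key naming its filter.
    return [next(iter(rule)) for rule in props["Rules"]]

# client = boto3.client("transcribe")          # requires AWS credentials
# rule_types(client, "my-category")            # category name is case sensitive
```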
Starts an asynchronous analytics job that not only transcribes the audio recording of a caller and agent, but also returns additional insights. These insights include how quickly or loudly the caller or agent was speaking. To retrieve additional insights with your analytics jobs, create categories. A category is a way to classify analytics jobs based on attributes, such as a customer's sentiment or a particular phrase being used during the call. For more information, see the operation.
See also: AWS API Documentation
Request Syntax
client.start_call_analytics_job(
    CallAnalyticsJobName='string',
    Media={
        'MediaFileUri': 'string',
        'RedactedMediaFileUri': 'string'
    },
    OutputLocation='string',
    OutputEncryptionKMSKeyId='string',
    DataAccessRoleArn='string',
    Settings={
        'VocabularyName': 'string',
        'VocabularyFilterName': 'string',
        'VocabularyFilterMethod': 'remove'|'mask'|'tag',
        'LanguageModelName': 'string',
        'ContentRedaction': {
            'RedactionType': 'PII',
            'RedactionOutput': 'redacted'|'redacted_and_unredacted'
        },
        'LanguageOptions': [
            'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN',
        ]
    },
    ChannelDefinitions=[
        {
            'ChannelId': 123,
            'ParticipantRole': 'AGENT'|'CUSTOMER'
        },
    ]
)
string
[REQUIRED]
The name of the call analytics job. You can't use the string "." or ".." by themselves as the job name. The name must also be unique within an AWS account. If you try to create a call analytics job with the same name as a previous call analytics job, you get a ConflictException error.
dict
[REQUIRED]
Describes the input media file in a transcription request.
MediaFileUri (string) --
The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri (string) --
The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
string
The Amazon S3 location where the output of the call analytics job is stored. You can provide the following location types to store the output of call analytics job:
s3://DOC-EXAMPLE-BUCKET1 If you specify a bucket, Amazon Transcribe saves the output of the analytics job as a JSON file at the root level of the bucket.
s3://DOC-EXAMPLE-BUCKET1/folder/ If you specify a path, Amazon Transcribe saves the output of the analytics job as s3://DOC-EXAMPLE-BUCKET1/folder/your-transcription-job-name.json. If you specify a folder, you must provide a trailing slash.
s3://DOC-EXAMPLE-BUCKET1/folder/filename.json If you provide a path that has the filename specified, Amazon Transcribe saves the output of the analytics job as s3://DOC-EXAMPLE-BUCKET1/folder/filename.json
You can specify an AWS Key Management Service key to encrypt the output of your analytics job using the OutputEncryptionKMSKeyId parameter. If you don't specify a KMS key, Amazon Transcribe uses the default Amazon S3 key for server-side encryption of the analytics job output that is placed in your S3 bucket.
string
The Amazon Resource Name (ARN) of the AWS Key Management Service key used to encrypt the output of the call analytics job. The user calling the operation must have permission to use the specified KMS key.
You can use either of the following to identify an AWS KMS key in the current account:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
KMS Key Alias: "alias/ExampleAlias"
You can use either of the following to identify a KMS key in the current account or another account:
Amazon Resource Name (ARN) of a KMS key in the current account or another account: "arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab"
ARN of a KMS key alias: "arn:aws:kms:region:account-ID:alias/ExampleAlias"
If you don't specify an encryption key, the output of the call analytics job is encrypted with the default Amazon S3 key (SSE-S3).
If you specify a KMS key to encrypt your output, you must also specify an output location in the OutputLocation parameter.
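The output-location rules above (bucket root, folder with trailing slash, KMS key requiring an output location) can be sketched as a small parameter builder. The bucket, folder, and alias names are placeholders; the helper itself is illustrative, not part of the SDK.

```python
def output_params(bucket, prefix=None, kms_key_id=None):
    """Build OutputLocation / OutputEncryptionKMSKeyId parameters."""
    location = f"s3://{bucket}"
    if prefix:
        # Folder outputs must end with a trailing slash.
        location += "/" + prefix.strip("/") + "/"
    params = {"OutputLocation": location}
    if kms_key_id:
        # A KMS key is only valid alongside an OutputLocation, which this
        # helper always sets.
        params["OutputEncryptionKMSKeyId"] = kms_key_id
    return params

output_params("DOC-EXAMPLE-BUCKET1", "folder", "alias/ExampleAlias")
# → {'OutputLocation': 's3://DOC-EXAMPLE-BUCKET1/folder/',
#    'OutputEncryptionKMSKeyId': 'alias/ExampleAlias'}
```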
string
[REQUIRED]
The Amazon Resource Name (ARN) of a role that has access to the S3 bucket that contains your input files. Amazon Transcribe assumes this role to read queued audio files. If you have specified an output S3 bucket for your transcription results, this role should have access to the output bucket as well.
dict
A Settings object that provides optional settings for a call analytics job.
VocabularyName (string) --
The name of a vocabulary to use when processing the call analytics job.
VocabularyFilterName (string) --
The name of the vocabulary filter to use when running a call analytics job. The filter that you specify must have the same language code as the analytics job.
VocabularyFilterMethod (string) --
Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
LanguageModelName (string) --
The structure used to describe a custom language model.
ContentRedaction (dict) --
Settings for content redaction within a transcription job.
RedactionType (string) -- [REQUIRED]
Request parameter that defines the entities to be redacted. The only accepted value is PII.
RedactionOutput (string) -- [REQUIRED]
The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted, Amazon Transcribe outputs only the redacted transcript.
When you choose redacted_and_unredacted, Amazon Transcribe outputs both the redacted and unredacted transcripts.
LanguageOptions (list) --
When you run a call analytics job, you can specify the language spoken in the audio, or you can have Amazon Transcribe identify the language for you.
To specify a language, specify an array with one language code. If you don't know the language, you can leave this field blank and Amazon Transcribe will use machine learning to identify the language for you. To improve the ability of Amazon Transcribe to correctly identify the language, you can provide an array of the languages that can be present in the audio.
The following list shows the supported languages and corresponding language codes for call analytics jobs:
Gulf Arabic (ar-AE)
Mandarin Chinese, Mainland (zh-CN)
Australian English (en-AU)
British English (en-GB)
Indian English (en-IN)
Irish English (en-IE)
Scottish English (en-AB)
US English (en-US)
Welsh English (en-WL)
Spanish (es-ES)
US Spanish (es-US)
French (fr-FR)
Canadian French (fr-CA)
German (de-DE)
Swiss German (de-CH)
Indian Hindi (hi-IN)
Italian (it-IT)
Japanese (ja-JP)
Korean (ko-KR)
Portuguese (pt-PT)
Brazilian Portuguese (pt-BR)
(string) --
list
When you start a call analytics job, you must pass an array that maps the agent and the customer to specific audio channels. The values you can assign to a channel are 0 and 1. The agent and the customer must each have their own channel. You can't assign more than one channel to an agent or customer.
(dict) --
For a call analytics job, an object that indicates the audio channel that belongs to the agent and the audio channel that belongs to the customer.
ChannelId (integer) --
A value that indicates the audio channel.
ParticipantRole (string) --
Indicates whether the person speaking on the audio channel is the agent or customer.
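A minimal request combining the required parameters might look like the sketch below. The job name, media URI, and role ARN are placeholders; the ChannelDefinitions entry maps channel 0 to the agent and channel 1 to the customer, giving each role exactly one channel as required.

```python
def build_request(job_name, media_uri, role_arn):
    """Assemble a minimal start_call_analytics_job request (a sketch)."""
    return {
        "CallAnalyticsJobName": job_name,  # must be unique within the account
        "Media": {"MediaFileUri": media_uri},
        "DataAccessRoleArn": role_arn,     # role with access to the input bucket
        "ChannelDefinitions": [
            {"ChannelId": 0, "ParticipantRole": "AGENT"},
            {"ChannelId": 1, "ParticipantRole": "CUSTOMER"},
        ],
    }

req = build_request(
    "example-job",
    "s3://DOC-EXAMPLE-BUCKET1/call.wav",
    "arn:aws:iam::123456789012:role/example-role",
)
# client = boto3.client("transcribe")   # requires AWS credentials
# client.start_call_analytics_job(**req)
```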
dict
Response Syntax
{ 'CallAnalyticsJob': { 'CallAnalyticsJobName': 'string', 'CallAnalyticsJobStatus': 'QUEUED'|'IN_PROGRESS'|'FAILED'|'COMPLETED', 'LanguageCode': 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', 'MediaSampleRateHertz': 123, 'MediaFormat': 'mp3'|'mp4'|'wav'|'flac'|'ogg'|'amr'|'webm', 'Media': { 'MediaFileUri': 'string', 'RedactedMediaFileUri': 'string' }, 'Transcript': { 'TranscriptFileUri': 'string', 'RedactedTranscriptFileUri': 'string' }, 'StartTime': datetime(2015, 1, 1), 'CreationTime': datetime(2015, 1, 1), 'CompletionTime': datetime(2015, 1, 1), 'FailureReason': 'string', 'DataAccessRoleArn': 'string', 'IdentifiedLanguageScore': ..., 'Settings': { 'VocabularyName': 'string', 'VocabularyFilterName': 'string', 'VocabularyFilterMethod': 'remove'|'mask'|'tag', 'LanguageModelName': 'string', 'ContentRedaction': { 'RedactionType': 'PII', 'RedactionOutput': 'redacted'|'redacted_and_unredacted' }, 'LanguageOptions': [ 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', ] }, 'ChannelDefinitions': [ { 'ChannelId': 123, 'ParticipantRole': 'AGENT'|'CUSTOMER' }, ] } }
Response Structure
(dict) --
CallAnalyticsJob (dict) --
An object containing the details of the asynchronous call analytics job.
CallAnalyticsJobName (string) --
The name of the call analytics job.
CallAnalyticsJobStatus (string) --
The status of the analytics job.
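Because the job runs asynchronously, callers typically poll this status until it reaches a terminal value. The sketch below assumes the get_call_analytics_job method from the same release; the 30-second delay is arbitrary, and the poll parameter exists only so the loop can be exercised without AWS credentials.

```python
import time

def wait_for_job(client, job_name, delay=30, poll=time.sleep):
    """Poll until the call analytics job is COMPLETED or FAILED."""
    while True:
        resp = client.get_call_analytics_job(CallAnalyticsJobName=job_name)
        job = resp["CallAnalyticsJob"]
        if job["CallAnalyticsJobStatus"] in ("COMPLETED", "FAILED"):
            return job
        poll(delay)  # still QUEUED or IN_PROGRESS; wait and retry

# client = boto3.client("transcribe")   # requires AWS credentials
# job = wait_for_job(client, "example-job")
```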
LanguageCode (string) --
If you know the language spoken between the customer and the agent, specify a language code for this field.
If you don't know the language, you can leave this field blank, and Amazon Transcribe will use machine learning to automatically identify the language. To improve the accuracy of language identification, you can provide an array containing the possible language codes for the language spoken in your audio.
The following list shows the supported languages and corresponding language codes for call analytics jobs:
Gulf Arabic (ar-AE)
Mandarin Chinese, Mainland (zh-CN)
Australian English (en-AU)
British English (en-GB)
Indian English (en-IN)
Irish English (en-IE)
Scottish English (en-AB)
US English (en-US)
Welsh English (en-WL)
Spanish (es-ES)
US Spanish (es-US)
French (fr-FR)
Canadian French (fr-CA)
German (de-DE)
Swiss German (de-CH)
Indian Hindi (hi-IN)
Italian (it-IT)
Japanese (ja-JP)
Korean (ko-KR)
Portuguese (pt-PT)
Brazilian Portuguese (pt-BR)
MediaSampleRateHertz (integer) --
The sample rate, in Hertz, of the audio.
MediaFormat (string) --
The format of the input audio file. Note: for call analytics jobs, only the following media formats are supported: MP3, MP4, WAV, FLAC, OGG, and WebM.
Media (dict) --
Describes the input media file in a transcription request.
MediaFileUri (string) --
The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri (string) --
The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript (dict) --
Identifies the location of a transcription.
TranscriptFileUri (string) --
The S3 object location of the transcript.
Use this URI to access the transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
RedactedTranscriptFileUri (string) --
The S3 object location of the redacted transcript.
Use this URI to access the redacted transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
StartTime (datetime) --
A timestamp that shows when the analytics job started processing.
CreationTime (datetime) --
A timestamp that shows when the analytics job was created.
CompletionTime (datetime) --
A timestamp that shows when the analytics job was completed.
FailureReason (string) --
If the CallAnalyticsJobStatus is FAILED, this field contains information about why the job failed.
The FailureReason field can contain one of the following values:
Unsupported media format: The media format specified in the MediaFormat field of the request isn't valid. See the description of the MediaFormat field for a list of valid values.
The media format provided does not match the detected media format: The media format of the audio file doesn't match the format specified in the MediaFormat field in the request. Check the media format of your media file and make sure the two values match.
Invalid sample rate for audio file: The sample rate specified in the MediaSampleRateHertz of the request isn't valid. The sample rate must be between 8000 and 48000 Hertz.
The sample rate provided does not match the detected sample rate: The sample rate in the audio file doesn't match the sample rate specified in the MediaSampleRateHertz field in the request. Check the sample rate of your media file and make sure that the two values match.
Invalid file size: file size too large: The size of your audio file is larger than what Amazon Transcribe can process. For more information, see Guidelines and Quotas in the Amazon Transcribe Developer Guide.
Invalid number of channels: number of channels too large: Your audio contains more channels than Amazon Transcribe is configured to process. To request additional channels, see Amazon Transcribe Endpoints and Quotas in the Amazon Web Services General Reference.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) that you use to get access to the analytics job.
IdentifiedLanguageScore (float) --
A value between zero and one that Amazon Transcribe assigned to the language that it identified in the source audio. This value appears only when you don't provide a single language code. Larger values indicate that Amazon Transcribe has higher confidence in the language that it identified.
Settings (dict) --
Provides information about the settings used to run a transcription job.
VocabularyName (string) --
The name of a vocabulary to use when processing the call analytics job.
VocabularyFilterName (string) --
The name of the vocabulary filter to use when running a call analytics job. The filter that you specify must have the same language code as the analytics job.
VocabularyFilterMethod (string) --
Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
LanguageModelName (string) --
The name of the custom language model used when processing the call analytics job.
ContentRedaction (dict) --
Settings for content redaction within a transcription job.
RedactionType (string) --
Request parameter that defines the entities to be redacted. The only accepted value is PII.
RedactionOutput (string) --
The output transcript file is stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted, Amazon Transcribe outputs only the redacted transcript.
When you choose redacted_and_unredacted, Amazon Transcribe outputs both the redacted and unredacted transcripts.
LanguageOptions (list) --
When you run a call analytics job, you can specify the language spoken in the audio, or you can have Amazon Transcribe identify the language for you.
To specify a language, specify an array with one language code. If you don't know the language, you can leave this field blank and Amazon Transcribe will use machine learning to identify the language for you. To improve the ability of Amazon Transcribe to correctly identify the language, you can provide an array of the languages that can be present in the audio.
The following list shows the supported languages and corresponding language codes for call analytics jobs:
Gulf Arabic (ar-AE)
Mandarin Chinese, Mainland (zh-CN)
Australian English (en-AU)
British English (en-GB)
Indian English (en-IN)
Irish English (en-IE)
Scottish English (en-AB)
US English (en-US)
Welsh English (en-WL)
Spanish (es-ES)
US Spanish (es-US)
French (fr-FR)
Canadian French (fr-CA)
German (de-DE)
Swiss German (de-CH)
Indian Hindi (hi-IN)
Italian (it-IT)
Japanese (ja-JP)
Korean (ko-KR)
Portuguese (pt-PT)
Brazilian Portuguese (pt-BR)
(string) --
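The list of supported codes above can be checked client-side before submitting a request. This is a hypothetical helper, not part of the boto3 API: `SUPPORTED_CALL_ANALYTICS_LANGUAGES` and `build_language_options` are illustrative names, and the set is transcribed from the list in this document.

```python
# Hypothetical sketch: build a LanguageOptions list for a call analytics
# request, rejecting codes not in the supported set listed above.
SUPPORTED_CALL_ANALYTICS_LANGUAGES = {
    "ar-AE", "zh-CN", "en-AU", "en-GB", "en-IN", "en-IE", "en-AB", "en-US",
    "en-WL", "es-ES", "es-US", "fr-FR", "fr-CA", "de-DE", "de-CH", "hi-IN",
    "it-IT", "ja-JP", "ko-KR", "pt-PT", "pt-BR",
}

def build_language_options(codes):
    """Return a LanguageOptions list, failing early on unsupported codes."""
    unsupported = [c for c in codes if c not in SUPPORTED_CALL_ANALYTICS_LANGUAGES]
    if unsupported:
        raise ValueError(f"Unsupported language codes: {unsupported}")
    return list(codes)
```

Validating locally avoids a round trip that would fail with a BadRequestException on the service side.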
ChannelDefinitions (list) --
Shows numeric values to indicate the channel assigned to the agent's audio and the channel assigned to the customer's audio.
(dict) --
For a call analytics job, an object that indicates the audio channel that belongs to the agent and the audio channel that belongs to the customer.
ChannelId (integer) --
A value that indicates the audio channel.
ParticipantRole (string) --
Indicates whether the person speaking on the audio channel is the agent or customer.
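The ChannelDefinitions shape above can be sketched as a small builder. This is an illustrative helper, not a boto3 function; the default channel numbers (agent on channel 0, customer on channel 1) are an assumption for the example.

```python
def build_channel_definitions(agent_channel=0, customer_channel=1):
    """Build a ChannelDefinitions list mapping each audio channel to a
    ParticipantRole, matching the response shape shown above.
    The 0/1 channel assignment is an assumption; use whatever channels
    your recording setup actually produces."""
    return [
        {"ChannelId": agent_channel, "ParticipantRole": "AGENT"},
        {"ChannelId": customer_channel, "ParticipantRole": "CUSTOMER"},
    ]
```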
Deletes a call analytics job using its name.
See also: AWS API Documentation
Request Syntax
client.delete_call_analytics_job( CallAnalyticsJobName='string' )
string
[REQUIRED]
The name of the call analytics job you want to delete.
dict
Response Syntax
{}
Response Structure
(dict) --
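A minimal sketch of the delete call, assuming a boto3 Transcribe client (`boto3.client('transcribe')`). The wrapper function is illustrative; the operation itself returns an empty dict on success, as shown above.

```python
def delete_analytics_job(client, job_name):
    """Delete a call analytics job by name.

    `client` is assumed to be a boto3 Transcribe client; on success the
    service returns an empty response dict, as documented above.
    """
    return client.delete_call_analytics_job(CallAnalyticsJobName=job_name)

# Usage (assumes AWS credentials are configured):
#   import boto3
#   delete_analytics_job(boto3.client("transcribe"), "my-analytics-job")
```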
Lists call analytics jobs that have a specified status or whose names contain a specified substring.
See also: AWS API Documentation
Request Syntax
client.list_call_analytics_jobs( Status='QUEUED'|'IN_PROGRESS'|'FAILED'|'COMPLETED', JobNameContains='string', NextToken='string', MaxResults=123 )
string
When specified, returns only call analytics jobs with the specified status. Jobs are ordered by creation date, with the most recent jobs returned first. If you don't specify a status, Amazon Transcribe returns all analytics jobs ordered by creation date.
string
When specified, the jobs returned in the list are limited to jobs whose name contains the specified string.
string
If you received a truncated result in the previous request, include NextToken to fetch the next set of jobs.
integer
The maximum number of call analytics jobs to return in the response. If there are fewer results in the list, this response contains only the actual results.
dict
Response Syntax
{ 'Status': 'QUEUED'|'IN_PROGRESS'|'FAILED'|'COMPLETED', 'NextToken': 'string', 'CallAnalyticsJobSummaries': [ { 'CallAnalyticsJobName': 'string', 'CreationTime': datetime(2015, 1, 1), 'StartTime': datetime(2015, 1, 1), 'CompletionTime': datetime(2015, 1, 1), 'LanguageCode': 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', 'CallAnalyticsJobStatus': 'QUEUED'|'IN_PROGRESS'|'FAILED'|'COMPLETED', 'FailureReason': 'string' }, ] }
Response Structure
(dict) --
Status (string) --
When specified, returns only call analytics jobs with that status. Jobs are ordered by creation date, with the most recent jobs returned first. If you don't specify a status, Amazon Transcribe returns all call analytics jobs ordered by creation date.
NextToken (string) --
The operation returns a page of jobs at a time. The maximum size of the page is set by the MaxResults parameter. If there are more jobs in the list than the page size, Amazon Transcribe returns the NextToken value. Include the token in your next request to the operation to return the next page of jobs.
CallAnalyticsJobSummaries (list) --
A list of objects containing summary information for a call analytics job.
(dict) --
Provides summary information about a call analytics job.
CallAnalyticsJobName (string) --
The name of the call analytics job.
CreationTime (datetime) --
A timestamp that shows when the call analytics job was created.
StartTime (datetime) --
A timestamp that shows when the job began processing.
CompletionTime (datetime) --
A timestamp that shows when the job was completed.
LanguageCode (string) --
The language of the transcript in the source audio file.
CallAnalyticsJobStatus (string) --
The status of the call analytics job.
FailureReason (string) --
If the CallAnalyticsJobStatus is FAILED, a description of the error.
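The NextToken behavior described above lends itself to a simple pagination loop. A sketch, assuming `client` exposes `list_call_analytics_jobs` with the request/response shape documented here; the generator name is illustrative.

```python
def iter_call_analytics_jobs(client, **kwargs):
    """Yield every CallAnalyticsJobSummary, following NextToken until the
    listing is exhausted. Extra keyword arguments (Status, JobNameContains,
    MaxResults) are passed through to each page request."""
    token = None
    while True:
        params = dict(kwargs)
        if token:
            params["NextToken"] = token
        page = client.list_call_analytics_jobs(**params)
        yield from page.get("CallAnalyticsJobSummaries", [])
        token = page.get("NextToken")
        if not token:
            break

# Usage (assumes AWS credentials are configured):
#   import boto3
#   for job in iter_call_analytics_jobs(boto3.client("transcribe"),
#                                       Status="COMPLETED"):
#       print(job["CallAnalyticsJobName"])
```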
Provides more information about the call analytics categories that you've created. You can use the information in this list to find a specific category. You can then use the GetCallAnalyticsCategory operation to get more information about it.
See also: AWS API Documentation
Request Syntax
client.list_call_analytics_categories( NextToken='string', MaxResults=123 )
string
When included, NextToken fetches the next set of categories if the result of the previous request was truncated.
integer
The maximum number of categories to return in the response. If there are fewer results in the list, the response contains only the actual results.
dict
Response Syntax
{ 'NextToken': 'string', 'Categories': [ { 'CategoryName': 'string', 'Rules': [ { 'NonTalkTimeFilter': { 'Threshold': 123, 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'InterruptionFilter': { 'Threshold': 123, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'TranscriptFilter': { 'TranscriptFilterType': 'EXACT', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False, 'Targets': [ 'string', ] }, 'SentimentFilter': { 'Sentiments': [ 'POSITIVE'|'NEGATIVE'|'NEUTRAL'|'MIXED', ], 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False } }, ], 'CreateTime': datetime(2015, 1, 1), 'LastUpdateTime': datetime(2015, 1, 1) }, ] }
Response Structure
(dict) --
NextToken (string) --
The operation returns a page of categories at a time. The maximum size of the list is set by the MaxResults parameter. If there are more categories in the list than the page size, Amazon Transcribe returns the NextToken value. Include the token in the next request to the operation to return the next page of analytics categories.
Categories (list) --
A list of objects containing information about analytics categories.
(dict) --
An object that contains the rules and additional information about a call analytics category.
CategoryName (string) --
The name of the call analytics category.
Rules (list) --
The rules used to create a call analytics category.
(dict) --
A condition in the call between the customer and the agent that you want to filter for.
NonTalkTimeFilter (dict) --
A condition for a time period when neither the customer nor the agent was talking.
Threshold (integer) --
The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period when people were talking.
InterruptionFilter (dict) --
A condition for a time period when either the customer or agent was interrupting the other person.
Threshold (integer) --
The duration of the interruption.
ParticipantRole (string) --
Indicates whether the agent or the customer was interrupting.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period where there was no interruption.
TranscriptFilter (dict) --
A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase is returned.
TranscriptFilterType (string) --
Matches the phrase to the transcription output in a word-for-word fashion. For example, if you specify the phrase "I want to speak to the manager," Amazon Transcribe attempts to match that specific phrase to the transcription.
AbsoluteTimeRange (dict) --
A time range, set in milliseconds, between two points in the call.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
Determines whether the customer or the agent is speaking the phrases that you've specified.
Negate (boolean) --
If TRUE, the rule that you specify is applied to everything except for the phrases that you specify.
Targets (list) --
The phrases that you're specifying for the transcript filter to match.
(string) --
SentimentFilter (dict) --
A condition that is applied to a particular customer sentiment.
Sentiments (list) --
An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
(string) --
AbsoluteTimeRange (dict) --
The time range, measured in milliseconds, of the sentiment.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
The time range, set in percentages, that corresponds to a proportion of the call.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
A value that determines whether the sentiment belongs to the customer or the agent.
Negate (boolean) --
Set to TRUE to look for sentiments that weren't specified in the request.
CreateTime (datetime) --
A timestamp that shows when the call analytics category was created.
LastUpdateTime (datetime) --
A timestamp that shows when the call analytics category was most recently updated.
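The rule objects described above are plain nested dicts, so they can be built with a small helper. This is an illustrative sketch, not a boto3 API: `non_talk_rule` is a hypothetical name, and the field names match the NonTalkTimeFilter schema documented in this section.

```python
def non_talk_rule(threshold_ms, start_pct=None, end_pct=None):
    """Build a Rules entry with a NonTalkTimeFilter, matching the schema
    shown above. Threshold is the silence duration in milliseconds; when
    both percentages are given, a RelativeTimeRange scopes the rule to
    that proportion of the call."""
    f = {"Threshold": threshold_ms, "Negate": False}
    if start_pct is not None and end_pct is not None:
        f["RelativeTimeRange"] = {
            "StartPercentage": start_pct,
            "EndPercentage": end_pct,
        }
    return {"NonTalkTimeFilter": f}

# A list of such dicts is what CreateCallAnalyticsCategory and
# UpdateCallAnalyticsCategory expect in their Rules parameter.
```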
Updates the call analytics category with new values. The UpdateCallAnalyticsCategory operation overwrites all of the existing information with the values that you provide in the request.
See also: AWS API Documentation
Request Syntax
client.update_call_analytics_category( CategoryName='string', Rules=[ { 'NonTalkTimeFilter': { 'Threshold': 123, 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'InterruptionFilter': { 'Threshold': 123, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'TranscriptFilter': { 'TranscriptFilterType': 'EXACT', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False, 'Targets': [ 'string', ] }, 'SentimentFilter': { 'Sentiments': [ 'POSITIVE'|'NEGATIVE'|'NEUTRAL'|'MIXED', ], 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False } }, ] )
string
[REQUIRED]
The name of the analytics category to update. The name is case sensitive. If you try to update a call analytics category with the same name as a previous category, you will receive a ConflictException error.
list
[REQUIRED]
The rules used for the updated analytics category. The rules that you provide in this field replace the ones that are currently being used.
(dict) --
A condition in the call between the customer and the agent that you want to filter for.
NonTalkTimeFilter (dict) --
A condition for a time period when neither the customer nor the agent was talking.
Threshold (integer) --
The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period when people were talking.
InterruptionFilter (dict) --
A condition for a time period when either the customer or agent was interrupting the other person.
Threshold (integer) --
The duration of the interruption.
ParticipantRole (string) --
Indicates whether the agent or the customer was interrupting.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period where there was no interruption.
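The relative-range arithmetic described above can be sketched in Python. The helper name and the call duration are illustrative, not part of the API:

```python
def relative_range_to_ms(duration_ms, start_percentage, end_percentage):
    """Convert a RelativeTimeRange's percentages into absolute
    millisecond marks for a call of the given duration."""
    start_ms = duration_ms * start_percentage // 100
    end_ms = duration_ms * end_percentage // 100
    return start_ms, end_ms

# For a 100,000 ms call, StartPercentage=10 / EndPercentage=50
# covers the 10,000 ms mark to the 50,000 ms mark.
print(relative_range_to_ms(100_000, 10, 50))
```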
TranscriptFilter (dict) --
A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase is returned.
TranscriptFilterType (string) -- [REQUIRED]
Matches the phrase to the transcription output word for word. For example, if you specify the phrase "I want to speak to the manager," Amazon Transcribe attempts to match that specific phrase to the transcription.
AbsoluteTimeRange (dict) --
A time range, set in milliseconds, between two points in the call.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call, or the period of time from halfway through to three-quarters of the way through the call. Because the length of a conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
Determines whether the customer or the agent is speaking the phrases that you've specified.
Negate (boolean) --
If TRUE, the rule that you specify is applied to everything except for the phrases that you specify.
Targets (list) -- [REQUIRED]
The phrases that you're specifying for the transcript filter to match.
(string) --
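Putting the TranscriptFilter fields above together, a single filter might be assembled as a plain dict like this sketch; the phrase and time range are illustrative values, not defaults:

```python
# An EXACT-match transcript filter: flag calls where the customer
# says this phrase in the first half of the call.
transcript_filter = {
    'TranscriptFilterType': 'EXACT',
    'RelativeTimeRange': {
        'StartPercentage': 0,
        'EndPercentage': 50,
    },
    'ParticipantRole': 'CUSTOMER',
    'Negate': False,
    'Targets': ['I want to speak to the manager'],
}
print(transcript_filter['Targets'])
```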
SentimentFilter (dict) --
A condition that is applied to a particular customer sentiment.
Sentiments (list) -- [REQUIRED]
An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
(string) --
AbsoluteTimeRange (dict) --
The time range, measured in milliseconds, of the sentiment.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
The time range, set in percentages, that corresponds to the proportion of the call.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
A value that determines whether the sentiment belongs to the customer or the agent.
Negate (boolean) --
Set to TRUE to look for sentiments that weren't specified in the request.
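Combining the SentimentFilter fields above, a single entry in the Rules list could look like the following sketch; the sentiment, percentages, and role are illustrative choices:

```python
# A rule that flags calls where the customer's sentiment was
# negative in the last 20% of the call.
sentiment_rule = {
    'SentimentFilter': {
        'Sentiments': ['NEGATIVE'],
        'RelativeTimeRange': {
            'StartPercentage': 80,
            'EndPercentage': 100,
        },
        'ParticipantRole': 'CUSTOMER',
        'Negate': False,
    }
}
print(sorted(sentiment_rule['SentimentFilter']))
```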
dict
Response Syntax
{ 'CategoryProperties': { 'CategoryName': 'string', 'Rules': [ { 'NonTalkTimeFilter': { 'Threshold': 123, 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'InterruptionFilter': { 'Threshold': 123, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'TranscriptFilter': { 'TranscriptFilterType': 'EXACT', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False, 'Targets': [ 'string', ] }, 'SentimentFilter': { 'Sentiments': [ 'POSITIVE'|'NEGATIVE'|'NEUTRAL'|'MIXED', ], 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False } }, ], 'CreateTime': datetime(2015, 1, 1), 'LastUpdateTime': datetime(2015, 1, 1) } }
Response Structure
(dict) --
CategoryProperties (dict) --
The attributes describing the analytics category. You can see information such as the rules that you've used to update the category and when the category was originally created.
CategoryName (string) --
The name of the call analytics category.
Rules (list) --
The rules used to create a call analytics category.
(dict) --
A condition in the call between the customer and the agent that you want to filter for.
NonTalkTimeFilter (dict) --
A condition for a time period when neither the customer nor the agent was talking.
Threshold (integer) --
The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call, or the period of time from halfway through to three-quarters of the way through the call. Because the length of a conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period when people were talking.
InterruptionFilter (dict) --
A condition for a time period when either the customer or agent was interrupting the other person.
Threshold (integer) --
The duration of the interruption.
ParticipantRole (string) --
Indicates whether the agent or the customer was interrupting.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call, or the period of time from halfway through to three-quarters of the way through the call. Because the length of a conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period where there was no interruption.
TranscriptFilter (dict) --
A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase is returned.
TranscriptFilterType (string) --
Matches the phrase to the transcription output word for word. For example, if you specify the phrase "I want to speak to the manager," Amazon Transcribe attempts to match that specific phrase to the transcription.
AbsoluteTimeRange (dict) --
A time range, set in milliseconds, between two points in the call.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call, or the period of time from halfway through to three-quarters of the way through the call. Because the length of a conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
Determines whether the customer or the agent is speaking the phrases that you've specified.
Negate (boolean) --
If TRUE, the rule that you specify is applied to everything except for the phrases that you specify.
Targets (list) --
The phrases that you're specifying for the transcript filter to match.
(string) --
SentimentFilter (dict) --
A condition that is applied to a particular customer sentiment.
Sentiments (list) --
An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
(string) --
AbsoluteTimeRange (dict) --
The time range, measured in milliseconds, of the sentiment.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
The time range, set in percentages, that corresponds to the proportion of the call.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
A value that determines whether the sentiment belongs to the customer or the agent.
Negate (boolean) --
Set to TRUE to look for sentiments that weren't specified in the request.
CreateTime (datetime) --
A timestamp that shows when the call analytics category was created.
LastUpdateTime (datetime) --
A timestamp that shows when the call analytics category was most recently updated.
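Assuming a response shaped like the syntax above, the category's properties can be read back as in this sketch. The response value here is a hand-built stand-in, not the result of a live API call:

```python
from datetime import datetime

# Stand-in for the dict a real call would return.
response = {
    'CategoryProperties': {
        'CategoryName': 'negative-sentiment',
        'Rules': [{'SentimentFilter': {'Sentiments': ['NEGATIVE']}}],
        'CreateTime': datetime(2021, 8, 4),
        'LastUpdateTime': datetime(2021, 8, 4),
    }
}

props = response['CategoryProperties']
print(props['CategoryName'], len(props['Rules']))
```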
Creates a call analytics category. Amazon Transcribe applies the conditions specified by your analytics categories to your call analytics jobs. For each analytics category, you specify one or more rules. For example, you can specify a rule that matches calls where the customer sentiment was neutral or negative. When you start a call analytics job, Amazon Transcribe applies the category to that job.
See also: AWS API Documentation
Request Syntax
client.create_call_analytics_category( CategoryName='string', Rules=[ { 'NonTalkTimeFilter': { 'Threshold': 123, 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'InterruptionFilter': { 'Threshold': 123, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'TranscriptFilter': { 'TranscriptFilterType': 'EXACT', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False, 'Targets': [ 'string', ] }, 'SentimentFilter': { 'Sentiments': [ 'POSITIVE'|'NEGATIVE'|'NEUTRAL'|'MIXED', ], 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False } }, ] )
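A minimal invocation might look like the sketch below. The category name, threshold, and rule are illustrative, and because the call requires valid AWS credentials, the boto3 lines are shown commented out rather than executed:

```python
# The keyword arguments for create_call_analytics_category,
# built as a plain dict so the shape is easy to inspect.
params = {
    'CategoryName': 'agent-interruptions',
    'Rules': [
        {
            'InterruptionFilter': {
                'Threshold': 15000,          # assumed: duration in milliseconds
                'ParticipantRole': 'AGENT',  # the agent was interrupting
                'Negate': False,
            }
        },
    ],
}

# import boto3
# client = boto3.client('transcribe')
# response = client.create_call_analytics_category(**params)
print(sorted(params))
```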
string
[REQUIRED]
The name that you choose for your category when you create it.
list
[REQUIRED]
To create a category, you must specify between 1 and 20 rules. For each rule, you specify a filter to be applied to the attributes of the call. For example, you can specify a sentiment filter to detect if the customer's sentiment was negative or neutral.
(dict) --
A condition in the call between the customer and the agent that you want to filter for.
NonTalkTimeFilter (dict) --
A condition for a time period when neither the customer nor the agent was talking.
Threshold (integer) --
The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call, or the period of time from halfway through to three-quarters of the way through the call. Because the length of a conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period when people were talking.
InterruptionFilter (dict) --
A condition for a time period when either the customer or agent was interrupting the other person.
Threshold (integer) --
The duration of the interruption.
ParticipantRole (string) --
Indicates whether the agent or the customer was interrupting.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call, or the period of time from halfway through to three-quarters of the way through the call. Because the length of a conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period where there was no interruption.
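As a sketch of how these fields combine, the rule below matches customer interruptions of at least 10 seconds within the first two minutes of a call; the threshold, role, and range values are illustrative choices, not defaults:

```python
# Hedged sketch of an InterruptionFilter rule as it would appear in the
# Rules list of a call analytics category. All values are illustrative.
interruption_rule = {
    'InterruptionFilter': {
        # Duration of the interruption, in milliseconds (10 seconds here).
        'Threshold': 10000,
        # Look only for interruptions by the customer.
        'ParticipantRole': 'CUSTOMER',
        # Restrict the search to the first 120,000 ms (2 minutes) of the call.
        'AbsoluteTimeRange': {'First': 120000},
        # False: match interruptions rather than their absence.
        'Negate': False,
    }
}
```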
TranscriptFilter (dict) --
A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase is returned.
TranscriptFilterType (string) -- [REQUIRED]
Matches the phrase to the transcription output word for word. For example, if you specify the phrase "I want to speak to the manager," Amazon Transcribe attempts to match that exact phrase in the transcription output.
AbsoluteTimeRange (dict) --
A time range, set in milliseconds, between two points in the call.
StartTime (integer) --
A value that indicates the beginning of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call, or the period between the halfway point and three-quarters of the way through the call. Because the length of a conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
Determines whether the customer or the agent is speaking the phrases that you've specified.
Negate (boolean) --
If TRUE, the rule that you specify is applied to everything except for the phrases that you specify.
Targets (list) -- [REQUIRED]
The phrases that you're specifying for the transcript filter to match.
(string) --
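Putting the transcript filter fields together, the rule below looks for an exact phrase spoken by the customer during the second half of a call; the phrase and percentages are illustrative:

```python
# Hedged sketch of a TranscriptFilter rule. 'EXACT' is the only filter
# type shown in this API version; the phrase and range are illustrative.
transcript_rule = {
    'TranscriptFilter': {
        'TranscriptFilterType': 'EXACT',
        # Apply the filter to the second half of the call.
        'RelativeTimeRange': {'StartPercentage': 50, 'EndPercentage': 100},
        # Match only when the customer says the phrase.
        'ParticipantRole': 'CUSTOMER',
        'Negate': False,
        # The phrases the filter should match.
        'Targets': ['I want to speak to the manager'],
    }
}
```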
SentimentFilter (dict) --
A condition that is applied to a particular customer sentiment.
Sentiments (list) -- [REQUIRED]
An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
(string) --
AbsoluteTimeRange (dict) --
The time range, measured in milliseconds, of the sentiment.
StartTime (integer) --
A value that indicates the beginning of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
The time range, set in percentages, that corresponds to the proportion of the call.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
A value that determines whether the sentiment belongs to the customer or the agent.
Negate (boolean) --
Set to TRUE to look for sentiments that weren't specified in the request.
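A sentiment rule follows the same shape; the sketch below flags negative customer sentiment in the last half of a call (values illustrative):

```python
# Hedged sketch of a SentimentFilter rule; sentiment and range values
# are illustrative, not defaults.
sentiment_rule = {
    'SentimentFilter': {
        # Match negative sentiment...
        'Sentiments': ['NEGATIVE'],
        # ...expressed by the customer...
        'ParticipantRole': 'CUSTOMER',
        # ...during the second half of the call.
        'RelativeTimeRange': {'StartPercentage': 50, 'EndPercentage': 100},
        'Negate': False,
    }
}
```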
dict
Response Syntax
{ 'CategoryProperties': { 'CategoryName': 'string', 'Rules': [ { 'NonTalkTimeFilter': { 'Threshold': 123, 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'InterruptionFilter': { 'Threshold': 123, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'Negate': True|False }, 'TranscriptFilter': { 'TranscriptFilterType': 'EXACT', 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False, 'Targets': [ 'string', ] }, 'SentimentFilter': { 'Sentiments': [ 'POSITIVE'|'NEGATIVE'|'NEUTRAL'|'MIXED', ], 'AbsoluteTimeRange': { 'StartTime': 123, 'EndTime': 123, 'First': 123, 'Last': 123 }, 'RelativeTimeRange': { 'StartPercentage': 123, 'EndPercentage': 123, 'First': 123, 'Last': 123 }, 'ParticipantRole': 'AGENT'|'CUSTOMER', 'Negate': True|False } }, ], 'CreateTime': datetime(2015, 1, 1), 'LastUpdateTime': datetime(2015, 1, 1) } }
Response Structure
(dict) --
CategoryProperties (dict) --
The rules and associated metadata used to create a category.
CategoryName (string) --
The name of the call analytics category.
Rules (list) --
The rules used to create a call analytics category.
(dict) --
A condition in the call between the customer and the agent that you want to filter for.
NonTalkTimeFilter (dict) --
A condition for a time period when neither the customer nor the agent was talking.
Threshold (integer) --
The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call, or the period between the halfway point and three-quarters of the way through the call. Because the length of a conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period when people were talking.
InterruptionFilter (dict) --
A condition for a time period when either the customer or agent was interrupting the other person.
Threshold (integer) --
The duration of the interruption.
ParticipantRole (string) --
Indicates whether the agent or the customer was interrupting.
AbsoluteTimeRange (dict) --
An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime (integer) --
A value that indicates the beginning of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call, or the period between the halfway point and three-quarters of the way through the call. Because the length of a conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
Negate (boolean) --
Set to TRUE to look for a time period where there was no interruption.
TranscriptFilter (dict) --
A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase is returned.
TranscriptFilterType (string) --
Matches the phrase to the transcription output word for word. For example, if you specify the phrase "I want to speak to the manager," Amazon Transcribe attempts to match that exact phrase in the transcription output.
AbsoluteTimeRange (dict) --
A time range, set in milliseconds, between two points in the call.
StartTime (integer) --
A value that indicates the beginning of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call, or the period between the halfway point and three-quarters of the way through the call. Because the length of a conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
Determines whether the customer or the agent is speaking the phrases that you've specified.
Negate (boolean) --
If TRUE, the rule that you specify is applied to everything except for the phrases that you specify.
Targets (list) --
The phrases that you're specifying for the transcript filter to match.
(string) --
SentimentFilter (dict) --
A condition that is applied to a particular customer sentiment.
Sentiments (list) --
An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
(string) --
AbsoluteTimeRange (dict) --
The time range, measured in milliseconds, of the sentiment.
StartTime (integer) --
A value that indicates the beginning of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
EndTime (integer) --
A value that indicates the end of the time range, in milliseconds. To set an absolute time range, you must specify both a start time and an end time. For example, if you specify the following values:
StartTime - 10000
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
First (integer) --
A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last (integer) --
A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange (dict) --
The time range, set in percentages, that corresponds to the proportion of the call.
StartPercentage (integer) --
A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
EndPercentage (integer) --
A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
StartPercentage - 10
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
First (integer) --
A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify 120000, the time range is set for the first 120,000 milliseconds of the call.
Last (integer) --
A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify 120000, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole (string) --
A value that determines whether the sentiment belongs to the customer or the agent.
Negate (boolean) --
Set to TRUE to look for sentiments that weren't specified in the request.
CreateTime (datetime) --
A timestamp that shows when the call analytics category was created.
LastUpdateTime (datetime) --
A timestamp that shows when the call analytics category was most recently updated.
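To illustrate working with the response shape above, the sketch below pulls the filter type out of each rule in a CategoryProperties dict; the helper and the hand-built sample are illustrative, not part of the SDK:

```python
def rule_types(category_properties):
    """Return the filter type used by each rule in a category response.

    Each entry in Rules is a dict with a single key naming the filter
    (NonTalkTimeFilter, InterruptionFilter, TranscriptFilter, or
    SentimentFilter).
    """
    return [next(iter(rule)) for rule in category_properties.get('Rules', [])]

# Hand-built sample shaped like a get_call_analytics_category response.
sample = {
    'CategoryProperties': {
        'CategoryName': 'escalations',
        'Rules': [
            {'NonTalkTimeFilter': {'Threshold': 15000, 'Negate': False}},
            {'SentimentFilter': {'Sentiments': ['NEGATIVE'], 'Negate': False}},
        ],
    }
}

print(rule_types(sample['CategoryProperties']))
# → ['NonTalkTimeFilter', 'SentimentFilter']
```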
Returns information about a call analytics job. To see the status of the job, check the CallAnalyticsJobStatus field. If the status is COMPLETED, the job is finished and you can find the results at the location specified in the TranscriptFileUri field. If you enable personally identifiable information (PII) redaction, the redacted transcript appears in the RedactedTranscriptFileUri field.
See also: AWS API Documentation
Request Syntax
client.get_call_analytics_job( CallAnalyticsJobName='string' )
string
[REQUIRED]
The name of the analytics job you want information about. This value is case sensitive.
dict
Response Syntax
{ 'CallAnalyticsJob': { 'CallAnalyticsJobName': 'string', 'CallAnalyticsJobStatus': 'QUEUED'|'IN_PROGRESS'|'FAILED'|'COMPLETED', 'LanguageCode': 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', 'MediaSampleRateHertz': 123, 'MediaFormat': 'mp3'|'mp4'|'wav'|'flac'|'ogg'|'amr'|'webm', 'Media': { 'MediaFileUri': 'string', 'RedactedMediaFileUri': 'string' }, 'Transcript': { 'TranscriptFileUri': 'string', 'RedactedTranscriptFileUri': 'string' }, 'StartTime': datetime(2015, 1, 1), 'CreationTime': datetime(2015, 1, 1), 'CompletionTime': datetime(2015, 1, 1), 'FailureReason': 'string', 'DataAccessRoleArn': 'string', 'IdentifiedLanguageScore': ..., 'Settings': { 'VocabularyName': 'string', 'VocabularyFilterName': 'string', 'VocabularyFilterMethod': 'remove'|'mask'|'tag', 'LanguageModelName': 'string', 'ContentRedaction': { 'RedactionType': 'PII', 'RedactionOutput': 'redacted'|'redacted_and_unredacted' }, 'LanguageOptions': [ 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', ] }, 'ChannelDefinitions': [ { 'ChannelId': 123, 'ParticipantRole': 'AGENT'|'CUSTOMER' }, ] } }
Response Structure
(dict) --
CallAnalyticsJob (dict) --
An object that contains the results of your call analytics job.
CallAnalyticsJobName (string) --
The name of the call analytics job.
CallAnalyticsJobStatus (string) --
The status of the analytics job.
LanguageCode (string) --
If you know the language spoken between the customer and the agent, specify a language code for this field.
If you don't know the language, you can leave this field blank, and Amazon Transcribe will use machine learning to automatically identify the language. To improve the accuracy of language identification, you can provide an array containing the possible language codes for the language spoken in your audio.
The following list shows the supported languages and corresponding language codes for call analytics jobs:
Gulf Arabic (ar-AE)
Mandarin Chinese, Mainland (zh-CN)
Australian English (en-AU)
British English (en-GB)
Indian English (en-IN)
Irish English (en-IE)
Scottish English (en-AB)
US English (en-US)
Welsh English (en-WL)
Spanish (es-ES)
US Spanish (es-US)
French (fr-FR)
Canadian French (fr-CA)
German (de-DE)
Swiss German (de-CH)
Indian Hindi (hi-IN)
Italian (it-IT)
Japanese (ja-JP)
Korean (ko-KR)
Portuguese (pt-PT)
Brazilian Portuguese (pt-BR)
MediaSampleRateHertz (integer) --
The sample rate, in Hertz, of the audio.
MediaFormat (string) --
The format of the input audio file. Note: for call analytics jobs, only the following media formats are supported: MP3, MP4, WAV, FLAC, OGG, and WebM.
Media (dict) --
Describes the input media file in a transcription request.
MediaFileUri (string) --
The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling.
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri (string) --
The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript (dict) --
Identifies the location of a transcription.
TranscriptFileUri (string) --
The S3 object location of the transcript.
Use this URI to access the transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
RedactedTranscriptFileUri (string) --
The S3 object location of the redacted transcript.
Use this URI to access the redacted transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
StartTime (datetime) --
A timestamp that shows when the analytics job started processing.
CreationTime (datetime) --
A timestamp that shows when the analytics job was created.
CompletionTime (datetime) --
A timestamp that shows when the analytics job was completed.
FailureReason (string) --
If the CallAnalyticsJobStatus is FAILED, this field contains information about why the job failed.
The FailureReason field can contain one of the following values:
Unsupported media format: The media format specified in the MediaFormat field of the request isn't valid. See the description of the MediaFormat field for a list of valid values.
The media format provided does not match the detected media format: The media format of the audio file doesn't match the format specified in the MediaFormat field in the request. Check the media format of your media file and make sure the two values match.
Invalid sample rate for audio file: The sample rate specified in the MediaSampleRateHertz field of the request isn't valid. The sample rate must be between 8,000 and 48,000 Hertz.
The sample rate provided does not match the detected sample rate: The sample rate in the audio file doesn't match the sample rate specified in the MediaSampleRateHertz field in the request. Check the sample rate of your media file and make sure that the two values match.
Invalid file size: file size too large: The size of your audio file is larger than what Amazon Transcribe can process. For more information, see Guidelines and Quotas in the Amazon Transcribe Developer Guide.
Invalid number of channels: number of channels too large: Your audio contains more channels than Amazon Transcribe is configured to process. To request additional channels, see Amazon Transcribe Endpoints and Quotas in the Amazon Web Services General Reference.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) that you use to access the analytics job.
IdentifiedLanguageScore (float) --
A value between zero and one that Amazon Transcribe assigned to the language that it identified in the source audio. This value appears only when you don't provide a single language code. Larger values indicate that Amazon Transcribe has higher confidence in the language that it identified.
Settings (dict) --
Provides information about the settings used to run a transcription job.
VocabularyName (string) --
The name of a vocabulary to use when processing the call analytics job.
VocabularyFilterName (string) --
The name of the vocabulary filter to use when running a call analytics job. The filter that you specify must have the same language code as the analytics job.
VocabularyFilterMethod (string) --
Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
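As an illustration only (Amazon Transcribe applies the filter server-side; this local sketch merely mimics the three documented behaviors on a list of words):

```python
def apply_filter(words, filtered, method):
    """Mimic the documented vocabulary-filter methods on a word list.

    mask   -> replace each filtered word with "***"
    remove -> drop each filtered word entirely
    tag    -> keep the word (the real transcript JSON annotates it instead)
    """
    out = []
    for word in words:
        if word.lower() in filtered:
            if method == "mask":
                out.append("***")
            elif method == "tag":
                out.append(word)
            # "remove" appends nothing, dropping the word
        else:
            out.append(word)
    return out

print(apply_filter(["please", "hold", "darn", "it"], {"darn"}, "mask"))
# ['please', 'hold', '***', 'it']
```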
LanguageModelName (string) --
The name of the custom language model used when processing the call analytics job.
ContentRedaction (dict) --
Settings for content redaction within a transcription job.
RedactionType (string) --
Request parameter that defines the entities to be redacted. The only accepted value is PII.
RedactionOutput (string) --
The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted, Amazon Transcribe outputs only the redacted transcript.
When you choose redacted_and_unredacted, Amazon Transcribe outputs both the redacted and unredacted transcripts.
LanguageOptions (list) --
When you run a call analytics job, you can specify the language spoken in the audio, or you can have Amazon Transcribe identify the language for you.
To specify a language, specify an array with one language code. If you don't know the language, you can leave this field blank and Amazon Transcribe will use machine learning to identify the language for you. To improve the ability of Amazon Transcribe to correctly identify the language, you can provide an array of the languages that can be present in the audio.
The following list shows the supported languages and corresponding language codes for call analytics jobs:
Gulf Arabic (ar-AE)
Mandarin Chinese, Mainland (zh-CN)
Australian English (en-AU)
British English (en-GB)
Indian English (en-IN)
Irish English (en-IE)
Scottish English (en-AB)
US English (en-US)
Welsh English (en-WL)
Spanish (es-ES)
US Spanish (es-US)
French (fr-FR)
Canadian French (fr-CA)
German (de-DE)
Swiss German (de-CH)
Indian Hindi (hi-IN)
Italian (it-IT)
Japanese (ja-JP)
Korean (ko-KR)
Portuguese (pt-PT)
Brazilian Portuguese (pt-BR)
(string) --
ChannelDefinitions (list) --
Shows numeric values to indicate the channel assigned to the agent's audio and the channel assigned to the customer's audio.
(dict) --
For a call analytics job, an object that indicates the audio channel that belongs to the agent and the audio channel that belongs to the customer.
ChannelId (integer) --
A value that indicates the audio channel.
ParticipantRole (string) --
Indicates whether the person speaking on the audio channel is the agent or customer.
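The ChannelDefinitions shape described above can be written out as plain Python data. The channel numbers below are illustrative, not prescribed by the API; assign them to match how your audio was recorded:

```python
# The list-of-dicts shape documented for ChannelDefinitions.
# ChannelId values 0 and 1 are examples; use the channels in your own audio.
channel_definitions = [
    {"ChannelId": 0, "ParticipantRole": "AGENT"},
    {"ChannelId": 1, "ParticipantRole": "CUSTOMER"},
]

def channel_for_role(definitions, role):
    """Return the ChannelId assigned to the given ParticipantRole, or None."""
    for definition in definitions:
        if definition["ParticipantRole"] == role:
            return definition["ChannelId"]
    return None

print(channel_for_role(channel_definitions, "AGENT"))     # 0
print(channel_for_role(channel_definitions, "CUSTOMER"))  # 1
```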
Deletes a call analytics category using its name.
See also: AWS API Documentation
Request Syntax
client.delete_call_analytics_category( CategoryName='string' )
string
[REQUIRED]
The name of the call analytics category that you're choosing to delete. The value is case sensitive.
dict
Response Syntax
{}
Response Structure
(dict) --
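A minimal sketch of calling this operation, assuming a boto3 Transcribe client created elsewhere; the category name "my-category" is hypothetical:

```python
def delete_category(transcribe_client, category_name):
    """Delete a call analytics category by its (case sensitive) name.

    The operation returns an empty response body on success, so there is
    nothing to inspect beyond the absence of an exception.
    """
    transcribe_client.delete_call_analytics_category(CategoryName=category_name)
    return True

# With a real client this would be:
#   client = boto3.client("transcribe")
#   delete_category(client, "my-category")
```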
{'MedicalTranscriptionJob': {'Media': {'RedactedMediaFileUri': 'string'}}}
Returns information about a transcription job from Amazon Transcribe Medical. To see the status of the job, check the TranscriptionJobStatus field. If the status is COMPLETED, the job is finished. You find the results of the completed job in the TranscriptFileUri field.
See also: AWS API Documentation
Request Syntax
client.get_medical_transcription_job( MedicalTranscriptionJobName='string' )
string
[REQUIRED]
The name of the medical transcription job.
dict
Response Syntax
{ 'MedicalTranscriptionJob': { 'MedicalTranscriptionJobName': 'string', 'TranscriptionJobStatus': 'QUEUED'|'IN_PROGRESS'|'FAILED'|'COMPLETED', 'LanguageCode': 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', 'MediaSampleRateHertz': 123, 'MediaFormat': 'mp3'|'mp4'|'wav'|'flac'|'ogg'|'amr'|'webm', 'Media': { 'MediaFileUri': 'string', 'RedactedMediaFileUri': 'string' }, 'Transcript': { 'TranscriptFileUri': 'string' }, 'StartTime': datetime(2015, 1, 1), 'CreationTime': datetime(2015, 1, 1), 'CompletionTime': datetime(2015, 1, 1), 'FailureReason': 'string', 'Settings': { 'ShowSpeakerLabels': True|False, 'MaxSpeakerLabels': 123, 'ChannelIdentification': True|False, 'ShowAlternatives': True|False, 'MaxAlternatives': 123, 'VocabularyName': 'string' }, 'ContentIdentificationType': 'PHI', 'Specialty': 'PRIMARYCARE', 'Type': 'CONVERSATION'|'DICTATION' } }
Response Structure
(dict) --
MedicalTranscriptionJob (dict) --
An object that contains the results of the medical transcription job.
MedicalTranscriptionJobName (string) --
The name for a given medical transcription job.
TranscriptionJobStatus (string) --
The completion status of a medical transcription job.
LanguageCode (string) --
The language code for the language spoken in the source audio file. US English (en-US) is the only supported language for medical transcriptions. Any other value you enter for language code results in a BadRequestException error.
MediaSampleRateHertz (integer) --
The sample rate, in Hertz, of the source audio containing medical information.
If you don't specify the sample rate, Amazon Transcribe Medical determines it for you. If you choose to specify the sample rate, it must match the rate detected by Amazon Transcribe Medical. In most cases, you should leave the MediaSampleRateHertz field blank and let Amazon Transcribe Medical determine the sample rate.
MediaFormat (string) --
The format of the input media file.
Media (dict) --
Describes the input media file in a transcription request.
MediaFileUri (string) --
The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri (string) --
The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript (dict) --
An object that contains the MedicalTranscript. The MedicalTranscript contains the TranscriptFileUri.
TranscriptFileUri (string) --
The S3 object location of the medical transcript.
Use this URI to access the medical transcript. This URI points to the S3 bucket you created to store the medical transcript.
StartTime (datetime) --
A timestamp that shows when the job started processing.
CreationTime (datetime) --
A timestamp that shows when the job was created.
CompletionTime (datetime) --
A timestamp that shows when the job was completed.
FailureReason (string) --
If the TranscriptionJobStatus field is FAILED, this field contains information about why the job failed.
The FailureReason field contains one of the following values:
Unsupported media format - The media format specified in the MediaFormat field of the request isn't valid. See the description of the MediaFormat field for a list of valid values.
The media format provided does not match the detected media format - The media format of the audio file doesn't match the format specified in the MediaFormat field in the request. Check the media format of your media file and make sure the two values match.
Invalid sample rate for audio file - The sample rate specified in the MediaSampleRateHertz field of the request isn't valid. The sample rate must be between 8000 and 48000 Hertz.
The sample rate provided does not match the detected sample rate - The sample rate in the audio file doesn't match the sample rate specified in the MediaSampleRateHertz field in the request. Check the sample rate of your media file and make sure that the two values match.
Invalid file size: file size too large - The size of your audio file is larger than what Amazon Transcribe Medical can process. For more information, see Guidelines and Quotas in the Amazon Transcribe Medical Guide.
Invalid number of channels: number of channels too large - Your audio contains more channels than Amazon Transcribe Medical is configured to process. To request additional channels, see Amazon Transcribe Medical Endpoints and Quotas in the Amazon Web Services General Reference.
Settings (dict) --
Optional settings for the medical transcription job.
ShowSpeakerLabels (boolean) --
Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels field.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.
MaxSpeakerLabels (integer) --
The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.
ChannelIdentification (boolean) --
Instructs Amazon Transcribe Medical to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe Medical also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item. The alternative transcriptions also come with confidence scores provided by Amazon Transcribe Medical.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException
ShowAlternatives (boolean) --
Determines whether alternative transcripts are generated along with the transcript that has the highest confidence. If you set the ShowAlternatives field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives field.
MaxAlternatives (integer) --
The maximum number of alternatives that you tell the service to return. If you specify the MaxAlternatives field, you must set the ShowAlternatives field to true.
VocabularyName (string) --
The name of the vocabulary to use when processing a medical transcription job.
ContentIdentificationType (string) --
Shows the type of content that you've configured Amazon Transcribe Medical to identify in a transcription job. If the value is PHI, you've configured the job to identify personal health information (PHI) in the transcription output.
Specialty (string) --
The medical specialty of any clinicians providing a dictation or having a conversation. PRIMARYCARE is the only available setting for this object. This specialty enables you to generate transcriptions for the following medical fields:
Family Medicine
Type (string) --
The type of speech in the transcription job. CONVERSATION is generally used for patient-physician dialogues. DICTATION is the setting for physicians speaking their notes after seeing a patient. For more information, see What is Amazon Transcribe Medical?.
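Because the job status must be polled until it reaches COMPLETED, a small helper is the usual pattern. This is a sketch, assuming a boto3 Transcribe client; the function name and polling interval are our own choices, not part of the API:

```python
import time

def wait_for_medical_job(client, job_name, poll_seconds=30):
    """Poll GetMedicalTranscriptionJob until it reaches a terminal status.

    Returns the MedicalTranscriptionJob dict on COMPLETED; raises on FAILED,
    surfacing the FailureReason field documented above.
    """
    while True:
        response = client.get_medical_transcription_job(
            MedicalTranscriptionJobName=job_name
        )
        job = response["MedicalTranscriptionJob"]
        status = job["TranscriptionJobStatus"]
        if status == "COMPLETED":
            return job
        if status == "FAILED":
            raise RuntimeError(job.get("FailureReason", "unknown failure"))
        time.sleep(poll_seconds)
```

On success, the transcript location is then available in `job["Transcript"]["TranscriptFileUri"]`.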
{'TranscriptionJob': {'Media': {'RedactedMediaFileUri': 'string'}}}
Returns information about a transcription job. To see the status of the job, check the TranscriptionJobStatus field. If the status is COMPLETED, the job is finished and you can find the results at the location specified in the TranscriptFileUri field. If you enable content redaction, the redacted transcript appears in RedactedTranscriptFileUri.
See also: AWS API Documentation
Request Syntax
client.get_transcription_job( TranscriptionJobName='string' )
string
[REQUIRED]
The name of the job.
dict
Response Syntax
{ 'TranscriptionJob': { 'TranscriptionJobName': 'string', 'TranscriptionJobStatus': 'QUEUED'|'IN_PROGRESS'|'FAILED'|'COMPLETED', 'LanguageCode': 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', 'MediaSampleRateHertz': 123, 'MediaFormat': 'mp3'|'mp4'|'wav'|'flac'|'ogg'|'amr'|'webm', 'Media': { 'MediaFileUri': 'string', 'RedactedMediaFileUri': 'string' }, 'Transcript': { 'TranscriptFileUri': 'string', 'RedactedTranscriptFileUri': 'string' }, 'StartTime': datetime(2015, 1, 1), 'CreationTime': datetime(2015, 1, 1), 'CompletionTime': datetime(2015, 1, 1), 'FailureReason': 'string', 'Settings': { 'VocabularyName': 'string', 'ShowSpeakerLabels': True|False, 'MaxSpeakerLabels': 123, 'ChannelIdentification': True|False, 'ShowAlternatives': True|False, 'MaxAlternatives': 123, 'VocabularyFilterName': 'string', 'VocabularyFilterMethod': 'remove'|'mask'|'tag' }, 'ModelSettings': { 'LanguageModelName': 'string' }, 'JobExecutionSettings': { 'AllowDeferredExecution': True|False, 'DataAccessRoleArn': 'string' }, 'ContentRedaction': { 'RedactionType': 'PII', 'RedactionOutput': 'redacted'|'redacted_and_unredacted' }, 'IdentifyLanguage': True|False, 'LanguageOptions': [ 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', ], 'IdentifiedLanguageScore': ... } }
Response Structure
(dict) --
TranscriptionJob (dict) --
An object that contains the results of the transcription job.
TranscriptionJobName (string) --
The name of the transcription job.
TranscriptionJobStatus (string) --
The status of the transcription job.
LanguageCode (string) --
The language code for the input speech.
MediaSampleRateHertz (integer) --
The sample rate, in Hertz, of the audio track in the input media file.
MediaFormat (string) --
The format of the input media file.
Media (dict) --
An object that describes the input media for the transcription job.
MediaFileUri (string) --
The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri (string) --
The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript (dict) --
An object that describes the output of the transcription job.
TranscriptFileUri (string) --
The S3 object location of the transcript.
Use this URI to access the transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
RedactedTranscriptFileUri (string) --
The S3 object location of the redacted transcript.
Use this URI to access the redacted transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
StartTime (datetime) --
A timestamp that shows when the job started processing.
CreationTime (datetime) --
A timestamp that shows when the job was created.
CompletionTime (datetime) --
A timestamp that shows when the job was completed.
FailureReason (string) --
If the TranscriptionJobStatus field is FAILED, this field contains information about why the job failed.
The FailureReason field can contain one of the following values:
Unsupported media format - The media format specified in the MediaFormat field of the request isn't valid. See the description of the MediaFormat field for a list of valid values.
The media format provided does not match the detected media format - The media format of the audio file doesn't match the format specified in the MediaFormat field in the request. Check the media format of your media file and make sure that the two values match.
Invalid sample rate for audio file - The sample rate specified in the MediaSampleRateHertz field of the request isn't valid. The sample rate must be between 8000 and 48000 Hertz.
The sample rate provided does not match the detected sample rate - The sample rate in the audio file doesn't match the sample rate specified in the MediaSampleRateHertz field in the request. Check the sample rate of your media file and make sure that the two values match.
Invalid file size: file size too large - The size of your audio file is larger than Amazon Transcribe can process. For more information, see Limits in the Amazon Transcribe Developer Guide.
Invalid number of channels: number of channels too large - Your audio contains more channels than Amazon Transcribe is configured to process. To request additional channels, see Amazon Transcribe Limits in the Amazon Web Services General Reference.
Settings (dict) --
Optional settings for the transcription job. Use these settings to turn on speaker recognition, to set the maximum number of speakers that should be identified and to specify a custom vocabulary to use when processing the transcription job.
VocabularyName (string) --
The name of a vocabulary to use when processing the transcription job.
ShowSpeakerLabels (boolean) --
Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels field.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.
MaxSpeakerLabels (integer) --
The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.
ChannelIdentification (boolean) --
Instructs Amazon Transcribe to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item including the confidence that Amazon Transcribe has in the transcription.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.
ShowAlternatives (boolean) --
Determines whether the transcription contains alternative transcriptions. If you set the ShowAlternatives field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives field.
MaxAlternatives (integer) --
The number of alternative transcriptions that the service should return. If you specify the MaxAlternatives field, you must set the ShowAlternatives field to true.
VocabularyFilterName (string) --
The name of the vocabulary filter to use when transcribing the audio. The filter that you specify must have the same language code as the transcription job.
VocabularyFilterMethod (string) --
Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
ModelSettings (dict) --
An object containing the details of your custom language model.
LanguageModelName (string) --
The name of your custom language model.
JobExecutionSettings (dict) --
Provides information about how a transcription job is executed.
AllowDeferredExecution (boolean) --
Indicates whether a job should be queued by Amazon Transcribe when the concurrent execution limit is exceeded. When the AllowDeferredExecution field is true, jobs are queued and executed when the number of executing jobs falls below the concurrent execution limit. If the field is false, Amazon Transcribe returns a LimitExceededException exception.
If you specify the AllowDeferredExecution field, you must specify the DataAccessRoleArn field.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) of a role that has access to the S3 bucket that contains the input files. Amazon Transcribe assumes this role to read queued media files. If you have specified an output S3 bucket for the transcription results, this role should have access to the output bucket as well.
If you specify the AllowDeferredExecution field, you must specify the DataAccessRoleArn field.
ContentRedaction (dict) --
An object that describes content redaction settings for the transcription job.
RedactionType (string) --
Request parameter that defines the entities to be redacted. The only accepted value is PII.
RedactionOutput (string) --
The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted, Amazon Transcribe outputs only the redacted transcript.
When you choose redacted_and_unredacted, Amazon Transcribe outputs both the redacted and unredacted transcripts.
IdentifyLanguage (boolean) --
A value that shows if automatic language identification was enabled for a transcription job.
LanguageOptions (list) --
Shows the optional array of language codes that you provided for transcription jobs with automatic language identification enabled.
(string) --
IdentifiedLanguageScore (float) --
A value between zero and one that Amazon Transcribe assigned to the language that it identified in the source audio. Larger values indicate that Amazon Transcribe has higher confidence in the language it identified.
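Pulling the transcript locations out of this response shape is a common first step. A minimal sketch against the documented Response Syntax (the helper name is ours; RedactedTranscriptFileUri is present only when content redaction was enabled):

```python
def transcript_uris(response):
    """Extract transcript locations from a GetTranscriptionJob response.

    Returns (transcript_uri, redacted_uri); redacted_uri is None unless
    content redaction was enabled for the job.
    """
    transcript = response["TranscriptionJob"]["Transcript"]
    return (
        transcript.get("TranscriptFileUri"),
        transcript.get("RedactedTranscriptFileUri"),
    )

# With a real client this would follow:
#   response = client.get_transcription_job(TranscriptionJobName="my-job")
#   plain_uri, redacted_uri = transcript_uris(response)
```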
{'Media': {'RedactedMediaFileUri': 'string'}}
Response
{'MedicalTranscriptionJob': {'Media': {'RedactedMediaFileUri': 'string'}}}
Starts a batch job to transcribe medical speech to text.
See also: AWS API Documentation
Request Syntax
client.start_medical_transcription_job( MedicalTranscriptionJobName='string', LanguageCode='af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', MediaSampleRateHertz=123, MediaFormat='mp3'|'mp4'|'wav'|'flac'|'ogg'|'amr'|'webm', Media={ 'MediaFileUri': 'string', 'RedactedMediaFileUri': 'string' }, OutputBucketName='string', OutputKey='string', OutputEncryptionKMSKeyId='string', Settings={ 'ShowSpeakerLabels': True|False, 'MaxSpeakerLabels': 123, 'ChannelIdentification': True|False, 'ShowAlternatives': True|False, 'MaxAlternatives': 123, 'VocabularyName': 'string' }, ContentIdentificationType='PHI', Specialty='PRIMARYCARE', Type='CONVERSATION'|'DICTATION' )
string
[REQUIRED]
The name of the medical transcription job. You can't use the strings "." or ".." by themselves as the job name. The name must also be unique within an Amazon Web Services account. If you try to create a medical transcription job with the same name as a previous medical transcription job, you get a ConflictException error.
string
[REQUIRED]
The language code for the language spoken in the input media file. US English (en-US) is the only valid value for medical transcription jobs. Any other value you enter for language code results in a BadRequestException error.
integer
The sample rate, in Hertz, of the audio track in the input media file.
If you do not specify the media sample rate, Amazon Transcribe Medical determines the sample rate. If you specify the sample rate, it must match the rate detected by Amazon Transcribe Medical. In most cases, you should leave the MediaSampleRateHertz field blank and let Amazon Transcribe Medical determine the sample rate.
string
The audio format of the input media file.
dict
[REQUIRED]
Describes the input media file in a transcription request.
MediaFileUri (string) --
The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri (string) --
The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
string
[REQUIRED]
The Amazon S3 location where the transcription is stored.
You must set OutputBucketName for Amazon Transcribe Medical to store the transcription results. Your transcript appears in the S3 location you specify. When you call the GetMedicalTranscriptionJob, the operation returns this location in the TranscriptFileUri field. The S3 bucket must have permissions that allow Amazon Transcribe Medical to put files in the bucket. For more information, see Permissions Required for IAM User Roles.
You can specify an Amazon Web Services Key Management Service (KMS) key to encrypt the output of your transcription using the OutputEncryptionKMSKeyId parameter. If you don't specify a KMS key, Amazon Transcribe Medical uses the default Amazon S3 key for server-side encryption of transcripts that are placed in your S3 bucket.
string
You can specify a location in an Amazon S3 bucket to store the output of your medical transcription job.
If you don't specify an output key, Amazon Transcribe Medical stores the output of your transcription job in the Amazon S3 bucket you specified. By default, the object key is "your-transcription-job-name.json".
You can use output keys to specify the Amazon S3 prefix and file name of the transcription output. For example, specifying the Amazon S3 prefix, "folder1/folder2/", as an output key would lead to the output being stored as "folder1/folder2/your-transcription-job-name.json". If you specify "my-other-job-name.json" as the output key, the object key is changed to "my-other-job-name.json". You can use an output key to change both the prefix and the file name, for example "folder/my-other-job-name.json".
If you specify an output key, you must also specify an S3 bucket in the OutputBucketName parameter.
string
The Amazon Resource Name (ARN) of the Amazon Web Services Key Management Service (KMS) key used to encrypt the output of the transcription job. The user calling the StartMedicalTranscriptionJob operation must have permission to use the specified KMS key.
You can use either of the following to identify a KMS key in the current account:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
KMS Key Alias: "alias/ExampleAlias"
You can use either of the following to identify a KMS key in the current account or another account:
Amazon Resource Name (ARN) of a KMS key in the current account or another account: "arn:aws:kms:region:account ID:key/1234abcd-12ab-34cd-56ef-1234567890ab"
ARN of a KMS Key Alias: "arn:aws:kms:region:account ID:alias/ExampleAlias"
If you don't specify an encryption key, the output of the medical transcription job is encrypted with the default Amazon S3 key (SSE-S3).
If you specify a KMS key to encrypt your output, you must also specify an output location in the OutputBucketName parameter.
dict
Optional settings for the medical transcription job.
ShowSpeakerLabels (boolean) --
Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels field.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.
MaxSpeakerLabels (integer) --
The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.
ChannelIdentification (boolean) --
Instructs Amazon Transcribe Medical to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe Medical also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item. The alternative transcriptions also come with confidence scores provided by Amazon Transcribe Medical.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException
ShowAlternatives (boolean) --
Determines whether alternative transcripts are generated along with the transcript that has the highest confidence. If you set the ShowAlternatives field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives field.
MaxAlternatives (integer) --
The maximum number of alternatives that you tell the service to return. If you specify the MaxAlternatives field, you must set the ShowAlternatives field to true.
VocabularyName (string) --
The name of the vocabulary to use when processing a medical transcription job.
string
You can configure Amazon Transcribe Medical to label content in the transcription output. If you specify PHI, Amazon Transcribe Medical labels the personal health information (PHI) that it identifies in the transcription output.
string
[REQUIRED]
The medical specialty of any clinician speaking in the input media.
string
[REQUIRED]
The type of speech in the input audio. CONVERSATION refers to conversations between two or more speakers, such as a conversation between a doctor and a patient. DICTATION refers to single-speaker dictated speech, such as a clinician's notes.
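The required and commonly used parameters above can be assembled into a minimal request sketch. The job name, bucket, and media URI below are hypothetical placeholders, not values from this documentation:

```python
# Hypothetical values: "example-medical-job", "DOC-EXAMPLE-BUCKET", and the
# media URI are placeholders; substitute your own resources.
start_kwargs = {
    "MedicalTranscriptionJobName": "example-medical-job",
    "LanguageCode": "en-US",    # the only valid value for medical jobs
    "MediaFormat": "wav",
    "Media": {"MediaFileUri": "s3://DOC-EXAMPLE-BUCKET/audio/visit.wav"},
    "OutputBucketName": "DOC-EXAMPLE-BUCKET",
    "Specialty": "PRIMARYCARE",  # the only available setting
    "Type": "CONVERSATION",
}

# With a boto3 client this would be submitted as:
#   client = boto3.client("transcribe")
#   response = client.start_medical_transcription_job(**start_kwargs)
```

Omitting MediaSampleRateHertz, as here, lets Amazon Transcribe Medical detect the sample rate itself, which the documentation above recommends.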
dict
Response Syntax
{ 'MedicalTranscriptionJob': { 'MedicalTranscriptionJobName': 'string', 'TranscriptionJobStatus': 'QUEUED'|'IN_PROGRESS'|'FAILED'|'COMPLETED', 'LanguageCode': 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', 'MediaSampleRateHertz': 123, 'MediaFormat': 'mp3'|'mp4'|'wav'|'flac'|'ogg'|'amr'|'webm', 'Media': { 'MediaFileUri': 'string', 'RedactedMediaFileUri': 'string' }, 'Transcript': { 'TranscriptFileUri': 'string' }, 'StartTime': datetime(2015, 1, 1), 'CreationTime': datetime(2015, 1, 1), 'CompletionTime': datetime(2015, 1, 1), 'FailureReason': 'string', 'Settings': { 'ShowSpeakerLabels': True|False, 'MaxSpeakerLabels': 123, 'ChannelIdentification': True|False, 'ShowAlternatives': True|False, 'MaxAlternatives': 123, 'VocabularyName': 'string' }, 'ContentIdentificationType': 'PHI', 'Specialty': 'PRIMARYCARE', 'Type': 'CONVERSATION'|'DICTATION' } }
Response Structure
(dict) --
MedicalTranscriptionJob (dict) --
A batch job submitted to transcribe medical speech to text.
MedicalTranscriptionJobName (string) --
The name for a given medical transcription job.
TranscriptionJobStatus (string) --
The completion status of a medical transcription job.
LanguageCode (string) --
The language code for the language spoken in the source audio file. US English (en-US) is the only supported language for medical transcriptions. Any other value you enter for language code results in a BadRequestException error.
MediaSampleRateHertz (integer) --
The sample rate, in Hertz, of the source audio containing medical information.
If you don't specify the sample rate, Amazon Transcribe Medical determines it for you. If you choose to specify the sample rate, it must match the rate detected by Amazon Transcribe Medical. In most cases, you should leave the MediaSampleRateHertz field blank and let Amazon Transcribe Medical determine the sample rate.
MediaFormat (string) --
The format of the input media file.
Media (dict) --
Describes the input media file in a transcription request.
MediaFileUri (string) --
The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri (string) --
The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript (dict) --
An object that contains the MedicalTranscript. The MedicalTranscript contains the TranscriptFileUri.
TranscriptFileUri (string) --
The S3 object location of the medical transcript.
Use this URI to access the medical transcript. This URI points to the S3 bucket you created to store the medical transcript.
StartTime (datetime) --
A timestamp that shows when the job started processing.
CreationTime (datetime) --
A timestamp that shows when the job was created.
CompletionTime (datetime) --
A timestamp that shows when the job was completed.
FailureReason (string) --
If the TranscriptionJobStatus field is FAILED, this field contains information about why the job failed.
The FailureReason field contains one of the following values:
Unsupported media format - The media format specified in the MediaFormat field of the request isn't valid. See the description of the MediaFormat field for a list of valid values.
The media format provided does not match the detected media format - The media format of the audio file doesn't match the format specified in the MediaFormat field in the request. Check the media format of your media file and make sure that the two values match.
Invalid sample rate for audio file - The sample rate specified in the MediaSampleRateHertz field of the request isn't valid. The sample rate must be between 8000 and 48000 Hertz.
The sample rate provided does not match the detected sample rate - The sample rate in the audio file doesn't match the sample rate specified in the MediaSampleRateHertz field in the request. Check the sample rate of your media file and make sure that the two values match.
Invalid file size: file size too large - The size of your audio file is larger than what Amazon Transcribe Medical can process. For more information, see Guidelines and Quotas in the Amazon Transcribe Medical Guide.
Invalid number of channels: number of channels too large - Your audio contains more channels than Amazon Transcribe Medical is configured to process. To request additional channels, see Amazon Transcribe Medical Endpoints and Quotas in the Amazon Web Services General Reference.
Settings (dict) --
Object that contains the settings used for the medical transcription job.
ShowSpeakerLabels (boolean) --
Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels field.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.
MaxSpeakerLabels (integer) --
The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.
ChannelIdentification (boolean) --
Instructs Amazon Transcribe Medical to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe Medical also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item. The alternative transcriptions also come with confidence scores provided by Amazon Transcribe Medical.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.
ShowAlternatives (boolean) --
Determines whether alternative transcripts are generated along with the transcript that has the highest confidence. If you set the ShowAlternatives field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives field.
MaxAlternatives (integer) --
The maximum number of alternatives that you tell the service to return. If you specify the MaxAlternatives field, you must set the ShowAlternatives field to true.
VocabularyName (string) --
The name of the vocabulary to use when processing a medical transcription job.
ContentIdentificationType (string) --
Shows the type of content that you've configured Amazon Transcribe Medical to identify in a transcription job. If the value is PHI, you've configured the job to identify personal health information (PHI) in the transcription output.
Specialty (string) --
The medical specialty of any clinicians providing a dictation or having a conversation. PRIMARYCARE is the only available setting for this object. This specialty enables you to generate transcriptions for the following medical fields:
Family Medicine
Type (string) --
The type of speech in the transcription job. CONVERSATION is generally used for patient-physician dialogues. DICTATION is the setting for physicians speaking their notes after seeing a patient. For more information, see What is Amazon Transcribe Medical?.
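Because TranscriptionJobStatus moves from QUEUED through IN_PROGRESS to COMPLETED or FAILED, batch jobs are typically polled. Below is a minimal polling sketch written against the response shape above; the client is injected as a parameter so the loop can be exercised without AWS credentials (for example, with a stub in tests). The delay and retry counts are illustrative choices, not SDK defaults.

```python
import time

# Poll get_medical_transcription_job until the job reaches a terminal
# status. `client` is any object exposing get_medical_transcription_job
# with the response structure documented above.
def wait_for_medical_job(client, job_name, delay=5, max_tries=60):
    for _ in range(max_tries):
        job = client.get_medical_transcription_job(
            MedicalTranscriptionJobName=job_name
        )["MedicalTranscriptionJob"]
        status = job["TranscriptionJobStatus"]
        if status in ("COMPLETED", "FAILED"):
            return job
        time.sleep(delay)
    raise TimeoutError(f"job {job_name} did not finish in time")
```

On COMPLETED, the transcript location is available at job["Transcript"]["TranscriptFileUri"]; on FAILED, job["FailureReason"] explains why, per the values listed above.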
{'Media': {'RedactedMediaFileUri': 'string'}}
Response
{'TranscriptionJob': {'Media': {'RedactedMediaFileUri': 'string'}}}
Starts an asynchronous job to transcribe speech to text.
See also: AWS API Documentation
Request Syntax
client.start_transcription_job( TranscriptionJobName='string', LanguageCode='af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', MediaSampleRateHertz=123, MediaFormat='mp3'|'mp4'|'wav'|'flac'|'ogg'|'amr'|'webm', Media={ 'MediaFileUri': 'string', 'RedactedMediaFileUri': 'string' }, OutputBucketName='string', OutputKey='string', OutputEncryptionKMSKeyId='string', Settings={ 'VocabularyName': 'string', 'ShowSpeakerLabels': True|False, 'MaxSpeakerLabels': 123, 'ChannelIdentification': True|False, 'ShowAlternatives': True|False, 'MaxAlternatives': 123, 'VocabularyFilterName': 'string', 'VocabularyFilterMethod': 'remove'|'mask'|'tag' }, ModelSettings={ 'LanguageModelName': 'string' }, JobExecutionSettings={ 'AllowDeferredExecution': True|False, 'DataAccessRoleArn': 'string' }, ContentRedaction={ 'RedactionType': 'PII', 'RedactionOutput': 'redacted'|'redacted_and_unredacted' }, IdentifyLanguage=True|False, LanguageOptions=[ 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', ] )
string
[REQUIRED]
The name of the job. You can't use the strings "." or ".." by themselves as the job name. The name must also be unique within an Amazon Web Services account. If you try to create a transcription job with the same name as a previous transcription job, you get a ConflictException error.
string
The language code for the language used in the input media file.
To transcribe speech in Modern Standard Arabic (ar-SA), your audio or video file must be encoded at a sample rate of 16000 Hz or higher.
integer
The sample rate, in Hertz, of the audio track in the input media file.
If you do not specify the media sample rate, Amazon Transcribe determines the sample rate. If you specify the sample rate, it must match the sample rate detected by Amazon Transcribe. In most cases, you should leave the MediaSampleRateHertz field blank and let Amazon Transcribe determine the sample rate.
string
The format of the input media file.
dict
[REQUIRED]
An object that describes the input media for a transcription job.
MediaFileUri (string) --
The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri (string) --
The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
string
The location where the transcription is stored.
If you set the OutputBucketName, Amazon Transcribe puts the transcript in the specified S3 bucket. When you call the GetTranscriptionJob operation, the operation returns this location in the TranscriptFileUri field. If you enable content redaction, the redacted transcript appears in RedactedTranscriptFileUri. If you enable content redaction and choose to output an unredacted transcript, that transcript's location still appears in the TranscriptFileUri. The S3 bucket must have permissions that allow Amazon Transcribe to put files in the bucket. For more information, see Permissions Required for IAM User Roles.
You can specify an Amazon Web Services Key Management Service (KMS) key to encrypt the output of your transcription using the OutputEncryptionKMSKeyId parameter. If you don't specify a KMS key, Amazon Transcribe uses the default Amazon S3 key for server-side encryption of transcripts that are placed in your S3 bucket.
If you don't set the OutputBucketName, Amazon Transcribe generates a pre-signed URL, a shareable URL that provides secure access to your transcription, and returns it in the TranscriptFileUri field. Use this URL to download the transcription.
string
You can specify a location in an Amazon S3 bucket to store the output of your transcription job.
If you don't specify an output key, Amazon Transcribe stores the output of your transcription job in the Amazon S3 bucket you specified. By default, the object key is "your-transcription-job-name.json".
You can use output keys to specify the Amazon S3 prefix and file name of the transcription output. For example, specifying the Amazon S3 prefix, "folder1/folder2/", as an output key would lead to the output being stored as "folder1/folder2/your-transcription-job-name.json". If you specify "my-other-job-name.json" as the output key, the object key is changed to "my-other-job-name.json". You can use an output key to change both the prefix and the file name, for example "folder/my-other-job-name.json".
If you specify an output key, you must also specify an S3 bucket in the OutputBucketName parameter.
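The OutputKey rules above (default key, prefix ending in "/", or full key override) can be modeled as a small helper. This is an illustration of the documented behavior only, not part of the SDK.

```python
# Resolve the final S3 object key for a transcription job's output,
# following the OutputKey rules described above.
def resolve_output_key(job_name, output_key=None):
    if output_key is None:
        # Default: the job name with a .json extension.
        return f"{job_name}.json"
    if output_key.endswith("/"):
        # An output key ending in "/" acts as an S3 prefix.
        return f"{output_key}{job_name}.json"
    # Otherwise the output key replaces the object key entirely.
    return output_key

print(resolve_output_key("my-job"))                      # my-job.json
print(resolve_output_key("my-job", "folder1/folder2/"))  # folder1/folder2/my-job.json
print(resolve_output_key("my-job", "my-other-job-name.json"))
```

Remember that specifying OutputKey requires OutputBucketName to be set as well.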
string
The Amazon Resource Name (ARN) of the Amazon Web Services Key Management Service (KMS) key used to encrypt the output of the transcription job. The user calling the StartTranscriptionJob operation must have permission to use the specified KMS key.
You can use either of the following to identify a KMS key in the current account:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
KMS Key Alias: "alias/ExampleAlias"
You can use either of the following to identify a KMS key in the current account or another account:
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:region:account ID:key/1234abcd-12ab-34cd-56ef-1234567890ab"
ARN of a KMS Key Alias: "arn:aws:kms:region:account ID:alias/ExampleAlias"
If you don't specify an encryption key, the output of the transcription job is encrypted with the default Amazon S3 key (SSE-S3).
If you specify a KMS key to encrypt your output, you must also specify an output location in the OutputBucketName parameter.
dict
A Settings object that provides optional settings for a transcription job.
VocabularyName (string) --
The name of a vocabulary to use when processing the transcription job.
ShowSpeakerLabels (boolean) --
Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels field.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.
MaxSpeakerLabels (integer) --
The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.
ChannelIdentification (boolean) --
Instructs Amazon Transcribe to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item including the confidence that Amazon Transcribe has in the transcription.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.
ShowAlternatives (boolean) --
Determines whether the transcription contains alternative transcriptions. If you set the ShowAlternatives field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives field.
MaxAlternatives (integer) --
The number of alternative transcriptions that the service should return. If you specify the MaxAlternatives field, you must set the ShowAlternatives field to true.
VocabularyFilterName (string) --
The name of the vocabulary filter to use when transcribing the audio. The filter that you specify must have the same language code as the transcription job.
VocabularyFilterMethod (string) --
Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
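The three VocabularyFilterMethod behaviors can be illustrated on the transcript text directly. This is a simplified model: in the real output, tag leaves the text unchanged and records the match as metadata on the word, which is not modeled here.

```python
# Illustration of how mask, remove, and tag affect filtered words in
# the transcript text, per the descriptions above.
def apply_filter(words, filtered, method):
    out = []
    for word in words:
        if word.lower() in filtered:
            if method == "mask":
                out.append("***")   # replaced with placeholder text
            elif method == "remove":
                continue            # dropped from the transcript
            else:                   # "tag": text left intact
                out.append(word)
        else:
            out.append(word)
    return out

words = ["that", "was", "awful"]
print(apply_filter(words, {"awful"}, "mask"))    # ['that', 'was', '***']
print(apply_filter(words, {"awful"}, "remove"))  # ['that', 'was']
print(apply_filter(words, {"awful"}, "tag"))     # ['that', 'was', 'awful']
```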
dict
Use this parameter to choose the custom language model for your transcription job.
LanguageModelName (string) --
The name of your custom language model.
dict
Provides information about how a transcription job is executed. Use this field to indicate that the job can be queued for deferred execution if the concurrency limit is reached and there are no slots available to immediately run the job.
AllowDeferredExecution (boolean) --
Indicates whether a job should be queued by Amazon Transcribe when the concurrent execution limit is exceeded. When the AllowDeferredExecution field is true, jobs are queued and executed when the number of executing jobs falls below the concurrent execution limit. If the field is false, Amazon Transcribe returns a LimitExceededException exception.
If you specify the AllowDeferredExecution field, you must specify the DataAccessRoleArn field.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) of a role that has access to the S3 bucket that contains the input files. Amazon Transcribe assumes this role to read queued media files. If you have specified an output S3 bucket for the transcription results, this role should have access to the output bucket as well.
If you specify the AllowDeferredExecution field, you must specify the DataAccessRoleArn field.
dict
An object that contains the request parameters for content redaction.
RedactionType (string) -- [REQUIRED]
Request parameter that defines the entities to be redacted. The only accepted value is PII.
RedactionOutput (string) -- [REQUIRED]
The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted, Amazon Transcribe outputs only the redacted transcript.
When you choose redacted_and_unredacted, Amazon Transcribe outputs both the redacted and unredacted transcripts.
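The ContentRedaction parameter described above takes exactly two required fields. A minimal sketch, with the field values drawn from the descriptions above:

```python
# The ContentRedaction portion of a StartTranscriptionJob request.
content_redaction = {
    "RedactionType": "PII",                        # only accepted value
    "RedactionOutput": "redacted_and_unredacted",  # or "redacted"
}

# This dict would be passed as the ContentRedaction= argument of
# start_transcription_job; with redacted_and_unredacted, the response's
# Transcript object carries both TranscriptFileUri and
# RedactedTranscriptFileUri.
print(content_redaction["RedactionOutput"])
```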
boolean
Set this field to true to enable automatic language identification. Automatic language identification is disabled by default. You receive a BadRequestException error if you also specify a value for LanguageCode.
list
An object containing a list of languages that might be present in your collection of audio files. Automatic language identification chooses a language that best matches the source audio from that list.
To transcribe speech in Modern Standard Arabic (ar-SA), your audio or video file must be encoded at a sample rate of 16000 Hz or higher.
(string) --
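Combining IdentifyLanguage with LanguageOptions, a request using automatic language identification might look like the sketch below. The job name, bucket, and candidate languages are placeholder choices; note that LanguageCode is deliberately omitted, since setting both it and IdentifyLanguage returns a BadRequestException per the field description above.

```python
# Sketch of a StartTranscriptionJob request using automatic language
# identification. LanguageCode must NOT be set alongside IdentifyLanguage.
request = {
    "TranscriptionJobName": "demo-language-id",          # placeholder
    "Media": {"MediaFileUri": "s3://my-bucket/call.mp3"},  # placeholder
    "IdentifyLanguage": True,
    "LanguageOptions": ["en-US", "es-US", "fr-CA"],      # candidate languages
}
assert "LanguageCode" not in request

# With boto3:
#   client = boto3.client("transcribe")
#   client.start_transcription_job(**request)
print(request["LanguageOptions"])
```

The identified language and its confidence come back in the response's LanguageCode and IdentifiedLanguageScore fields.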
dict
Response Syntax
{ 'TranscriptionJob': { 'TranscriptionJobName': 'string', 'TranscriptionJobStatus': 'QUEUED'|'IN_PROGRESS'|'FAILED'|'COMPLETED', 'LanguageCode': 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', 'MediaSampleRateHertz': 123, 'MediaFormat': 'mp3'|'mp4'|'wav'|'flac'|'ogg'|'amr'|'webm', 'Media': { 'MediaFileUri': 'string', 'RedactedMediaFileUri': 'string' }, 'Transcript': { 'TranscriptFileUri': 'string', 'RedactedTranscriptFileUri': 'string' }, 'StartTime': datetime(2015, 1, 1), 'CreationTime': datetime(2015, 1, 1), 'CompletionTime': datetime(2015, 1, 1), 'FailureReason': 'string', 'Settings': { 'VocabularyName': 'string', 'ShowSpeakerLabels': True|False, 'MaxSpeakerLabels': 123, 'ChannelIdentification': True|False, 'ShowAlternatives': True|False, 'MaxAlternatives': 123, 'VocabularyFilterName': 'string', 'VocabularyFilterMethod': 'remove'|'mask'|'tag' }, 'ModelSettings': { 'LanguageModelName': 'string' }, 'JobExecutionSettings': { 'AllowDeferredExecution': True|False, 'DataAccessRoleArn': 'string' }, 'ContentRedaction': { 'RedactionType': 'PII', 'RedactionOutput': 'redacted'|'redacted_and_unredacted' }, 'IdentifyLanguage': True|False, 'LanguageOptions': [ 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN', ], 'IdentifiedLanguageScore': ... } }
Response Structure
(dict) --
TranscriptionJob (dict) --
An object containing details of the asynchronous transcription job.
TranscriptionJobName (string) --
The name of the transcription job.
TranscriptionJobStatus (string) --
The status of the transcription job.
LanguageCode (string) --
The language code for the input speech.
MediaSampleRateHertz (integer) --
The sample rate, in Hertz, of the audio track in the input media file.
MediaFormat (string) --
The format of the input media file.
Media (dict) --
An object that describes the input media for the transcription job.
MediaFileUri (string) --
The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri (string) --
The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript (dict) --
An object that describes the output of the transcription job.
TranscriptFileUri (string) --
The S3 object location of the transcript.
Use this URI to access the transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
RedactedTranscriptFileUri (string) --
The S3 object location of the redacted transcript.
Use this URI to access the redacted transcript. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
StartTime (datetime) --
A timestamp that shows when the job started processing.
CreationTime (datetime) --
A timestamp that shows when the job was created.
CompletionTime (datetime) --
A timestamp that shows when the job was completed.
FailureReason (string) --
If the TranscriptionJobStatus field is FAILED, this field contains information about why the job failed.
The FailureReason field can contain one of the following values:
Unsupported media format - The media format specified in the MediaFormat field of the request isn't valid. See the description of the MediaFormat field for a list of valid values.
The media format provided does not match the detected media format - The media format of the audio file doesn't match the format specified in the MediaFormat field in the request. Check the media format of your media file and make sure that the two values match.
Invalid sample rate for audio file - The sample rate specified in the MediaSampleRateHertz field of the request isn't valid. The sample rate must be between 8000 and 48000 Hertz.
The sample rate provided does not match the detected sample rate - The sample rate in the audio file doesn't match the sample rate specified in the MediaSampleRateHertz field in the request. Check the sample rate of your media file and make sure that the two values match.
Invalid file size: file size too large - The size of your audio file is larger than Amazon Transcribe can process. For more information, see Limits in the Amazon Transcribe Developer Guide.
Invalid number of channels: number of channels too large - Your audio contains more channels than Amazon Transcribe is configured to process. To request additional channels, see Amazon Transcribe Limits in the Amazon Web Services General Reference.
Settings (dict) --
Optional settings for the transcription job. Use these settings to turn on speaker recognition, to set the maximum number of speakers that should be identified and to specify a custom vocabulary to use when processing the transcription job.
VocabularyName (string) --
The name of a vocabulary to use when processing the transcription job.
ShowSpeakerLabels (boolean) --
Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels field.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.
MaxSpeakerLabels (integer) --
The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.
ChannelIdentification (boolean) --
Instructs Amazon Transcribe to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item including the confidence that Amazon Transcribe has in the transcription.
You can't set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.
ShowAlternatives (boolean) --
Determines whether the transcription contains alternative transcriptions. If you set the ShowAlternatives field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives field.
MaxAlternatives (integer) --
The number of alternative transcriptions that the service should return. If you specify the MaxAlternatives field, you must set the ShowAlternatives field to true.
VocabularyFilterName (string) --
The name of the vocabulary filter to use when transcribing the audio. The filter that you specify must have the same language code as the transcription job.
VocabularyFilterMethod (string) --
Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
ModelSettings (dict) --
An object containing the details of your custom language model.
LanguageModelName (string) --
The name of your custom language model.
JobExecutionSettings (dict) --
Provides information about how a transcription job is executed.
AllowDeferredExecution (boolean) --
Indicates whether a job should be queued by Amazon Transcribe when the concurrent execution limit is exceeded. When the AllowDeferredExecution field is true, jobs are queued and executed when the number of executing jobs falls below the concurrent execution limit. If the field is false, Amazon Transcribe returns a LimitExceededException exception.
If you specify the AllowDeferredExecution field, you must specify the DataAccessRoleArn field.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) of a role that has access to the S3 bucket that contains the input files. Amazon Transcribe assumes this role to read queued media files. If you have specified an output S3 bucket for the transcription results, this role should have access to the output bucket as well.
If you specify the AllowDeferredExecution field, you must specify the DataAccessRoleArn field.
ContentRedaction (dict) --
An object that describes content redaction settings for the transcription job.
RedactionType (string) --
Request parameter that defines the entities to be redacted. The only accepted value is PII.
RedactionOutput (string) --
The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted, Amazon Transcribe outputs only the redacted transcript.
When you choose redacted_and_unredacted, Amazon Transcribe outputs both the redacted and unredacted transcripts.
IdentifyLanguage (boolean) --
A value that shows if automatic language identification was enabled for a transcription job.
LanguageOptions (list) --
An object that shows the optional array of languages specified for transcription jobs with automatic language identification enabled.
(string) --
IdentifiedLanguageScore (float) --
A value between zero and one that Amazon Transcribe assigned to the language that it identified in the source audio. Larger values indicate that Amazon Transcribe has higher confidence in the language it identified.
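When content redaction is enabled with redacted_and_unredacted, the completed job's Transcript object carries both URIs described above. A small helper for choosing which one to download, written against that response shape (an illustration only):

```python
# Given a completed job dict shaped like the response structure above,
# return the transcript URI, preferring the redacted transcript if present.
def transcript_uri(job):
    transcript = job["Transcript"]
    return transcript.get("RedactedTranscriptFileUri") or transcript["TranscriptFileUri"]

job = {
    "Transcript": {
        "TranscriptFileUri": "s3://out/job.json",
        "RedactedTranscriptFileUri": "s3://out/redacted-job.json",
    }
}
print(transcript_uri(job))  # s3://out/redacted-job.json
```

For jobs without content redaction, RedactedTranscriptFileUri is absent and the helper falls back to TranscriptFileUri.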