Amazon Chime SDK Media Pipelines

2023/08/31 - Amazon Chime SDK Media Pipelines - 9 updated API methods

Changes: This release adds support for the Voice Enhancement for Call Recording feature as part of Amazon Chime SDK call analytics.

CreateMediaCapturePipeline (updated) Link ¶
Changes (response)
{'MediaCapturePipeline': {'Status': {'NotStarted'}}}

Creates a media pipeline.

See also: AWS API Documentation

Request Syntax

client.create_media_capture_pipeline(
    SourceType='ChimeSdkMeeting',
    SourceArn='string',
    SinkType='S3Bucket',
    SinkArn='string',
    ClientRequestToken='string',
    ChimeSdkMeetingConfiguration={
        'SourceConfiguration': {
            'SelectedVideoStreams': {
                'AttendeeIds': [
                    'string',
                ],
                'ExternalUserIds': [
                    'string',
                ]
            }
        },
        'ArtifactsConfiguration': {
            'Audio': {
                'MuxType': 'AudioOnly'|'AudioWithActiveSpeakerVideo'|'AudioWithCompositedVideo'
            },
            'Video': {
                'State': 'Enabled'|'Disabled',
                'MuxType': 'VideoOnly'
            },
            'Content': {
                'State': 'Enabled'|'Disabled',
                'MuxType': 'ContentOnly'
            },
            'CompositedVideo': {
                'Layout': 'GridView',
                'Resolution': 'HD'|'FHD',
                'GridViewConfiguration': {
                    'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly',
                    'PresenterOnlyConfiguration': {
                        'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                    },
                    'ActiveSpeakerOnlyConfiguration': {
                        'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                    },
                    'HorizontalLayoutConfiguration': {
                        'TileOrder': 'JoinSequence'|'SpeakerSequence',
                        'TilePosition': 'Top'|'Bottom',
                        'TileCount': 123,
                        'TileAspectRatio': 'string'
                    },
                    'VerticalLayoutConfiguration': {
                        'TileOrder': 'JoinSequence'|'SpeakerSequence',
                        'TilePosition': 'Left'|'Right',
                        'TileCount': 123,
                        'TileAspectRatio': 'string'
                    },
                    'VideoAttribute': {
                        'CornerRadius': 123,
                        'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                        'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                        'BorderThickness': 123
                    },
                    'CanvasOrientation': 'Landscape'|'Portrait'
                }
            }
        }
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type SourceType

string

param SourceType

[REQUIRED]

Source type from which the media artifacts are captured. A Chime SDK Meeting is the only supported source.

type SourceArn

string

param SourceArn

[REQUIRED]

ARN of the source from which the media artifacts are captured.

type SinkType

string

param SinkType

[REQUIRED]

Destination type to which the media artifacts are saved. You must use an S3 bucket.

type SinkArn

string

param SinkArn

[REQUIRED]

The ARN of the sink type.

type ClientRequestToken

string

param ClientRequestToken

The unique identifier for the client request. The token makes the API request idempotent. Use a unique token for each media pipeline request.

This field is autopopulated if not provided.

type ChimeSdkMeetingConfiguration

dict

param ChimeSdkMeetingConfiguration

The configuration for a specified media pipeline. SourceType must be ChimeSdkMeeting .

  • SourceConfiguration (dict) --

    The source configuration for a specified media pipeline.

    • SelectedVideoStreams (dict) --

      The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

      • AttendeeIds (list) --

        The attendee IDs of the streams selected for a media pipeline.

        • (string) --

      • ExternalUserIds (list) --

        The external user IDs of the streams selected for a media pipeline.

        • (string) --

  • ArtifactsConfiguration (dict) --

    The configuration for the artifacts in an Amazon Chime SDK meeting.

    • Audio (dict) -- [REQUIRED]

      The configuration for the audio artifacts.

      • MuxType (string) -- [REQUIRED]

        The MUX type of the audio artifact configuration object.

    • Video (dict) -- [REQUIRED]

      The configuration for the video artifacts.

      • State (string) -- [REQUIRED]

        Indicates whether the video artifact is enabled or disabled.

      • MuxType (string) --

        The MUX type of the video artifact configuration object.

    • Content (dict) -- [REQUIRED]

      The configuration for the content artifacts.

      • State (string) -- [REQUIRED]

        Indicates whether the content artifact is enabled or disabled.

      • MuxType (string) --

        The MUX type of the artifact configuration.

    • CompositedVideo (dict) --

      Enables video compositing.

      • Layout (string) --

        The layout setting, such as GridView in the configuration object.

      • Resolution (string) --

        The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

      • GridViewConfiguration (dict) -- [REQUIRED]

        The GridView configuration setting.

        • ContentShareLayout (string) -- [REQUIRED]

          Defines the layout of the video tiles when content sharing is enabled.

        • PresenterOnlyConfiguration (dict) --

          Defines the configuration options for a presenter only video tile.

          • PresenterPosition (string) --

            Defines the position of the presenter video tile. Default: TopRight .

        • ActiveSpeakerOnlyConfiguration (dict) --

          The configuration settings for an ActiveSpeakerOnly video tile.

          • ActiveSpeakerPosition (string) --

            The position of the ActiveSpeakerOnly video tile.

        • HorizontalLayoutConfiguration (dict) --

          The configuration settings for a horizontal layout.

          • TileOrder (string) --

            Sets the automatic ordering of the video tiles.

          • TilePosition (string) --

            Sets the position of horizontal tiles.

          • TileCount (integer) --

            The maximum number of video tiles to display.

          • TileAspectRatio (string) --

            Sets the aspect ratio of the video tiles, such as 16:9.

        • VerticalLayoutConfiguration (dict) --

          The configuration settings for a vertical layout.

          • TileOrder (string) --

            Sets the automatic ordering of the video tiles.

          • TilePosition (string) --

            Sets the position of vertical tiles.

          • TileCount (integer) --

            The maximum number of tiles to display.

          • TileAspectRatio (string) --

            Sets the aspect ratio of the video tiles, such as 16:9.

        • VideoAttribute (dict) --

          The attribute settings for the video tiles.

          • CornerRadius (integer) --

            Sets the corner radius of all video tiles.

          • BorderColor (string) --

            Defines the border color of all video tiles.

          • HighlightColor (string) --

            Defines the highlight color for the active video tile.

          • BorderThickness (integer) --

            Defines the border thickness for all video tiles.

        • CanvasOrientation (string) --

          The orientation setting, Landscape or Portrait .

type Tags

list

param Tags

The tag key-value pairs.

  • (dict) --

    A key/value pair that grants users access to meeting resources.

    • Key (string) -- [REQUIRED]

      The key half of a tag.

    • Value (string) -- [REQUIRED]

      The value half of a tag.

rtype

dict

returns

Response Syntax

{
    'MediaCapturePipeline': {
        'MediaPipelineId': 'string',
        'MediaPipelineArn': 'string',
        'SourceType': 'ChimeSdkMeeting',
        'SourceArn': 'string',
        'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted',
        'SinkType': 'S3Bucket',
        'SinkArn': 'string',
        'CreatedTimestamp': datetime(2015, 1, 1),
        'UpdatedTimestamp': datetime(2015, 1, 1),
        'ChimeSdkMeetingConfiguration': {
            'SourceConfiguration': {
                'SelectedVideoStreams': {
                    'AttendeeIds': [
                        'string',
                    ],
                    'ExternalUserIds': [
                        'string',
                    ]
                }
            },
            'ArtifactsConfiguration': {
                'Audio': {
                    'MuxType': 'AudioOnly'|'AudioWithActiveSpeakerVideo'|'AudioWithCompositedVideo'
                },
                'Video': {
                    'State': 'Enabled'|'Disabled',
                    'MuxType': 'VideoOnly'
                },
                'Content': {
                    'State': 'Enabled'|'Disabled',
                    'MuxType': 'ContentOnly'
                },
                'CompositedVideo': {
                    'Layout': 'GridView',
                    'Resolution': 'HD'|'FHD',
                    'GridViewConfiguration': {
                        'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly',
                        'PresenterOnlyConfiguration': {
                            'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                        },
                        'ActiveSpeakerOnlyConfiguration': {
                            'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                        },
                        'HorizontalLayoutConfiguration': {
                            'TileOrder': 'JoinSequence'|'SpeakerSequence',
                            'TilePosition': 'Top'|'Bottom',
                            'TileCount': 123,
                            'TileAspectRatio': 'string'
                        },
                        'VerticalLayoutConfiguration': {
                            'TileOrder': 'JoinSequence'|'SpeakerSequence',
                            'TilePosition': 'Left'|'Right',
                            'TileCount': 123,
                            'TileAspectRatio': 'string'
                        },
                        'VideoAttribute': {
                            'CornerRadius': 123,
                            'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                            'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                            'BorderThickness': 123
                        },
                        'CanvasOrientation': 'Landscape'|'Portrait'
                    }
                }
            }
        }
    }
}

Response Structure

  • (dict) --

    • MediaCapturePipeline (dict) --

      A media capture pipeline object containing the ID, source type, source ARN, sink type, and sink ARN of the pipeline.

      • MediaPipelineId (string) --

        The ID of a media pipeline.

      • MediaPipelineArn (string) --

        The ARN of the media capture pipeline.

      • SourceType (string) --

        Source type from which the media artifacts are saved. You must use ChimeSdkMeeting .

      • SourceArn (string) --

        ARN of the source from which the media artifacts are saved.

      • Status (string) --

        The status of the media pipeline.

      • SinkType (string) --

        Destination type to which the media artifacts are saved. You must use an S3 Bucket.

      • SinkArn (string) --

        ARN of the destination to which the media artifacts are saved.

      • CreatedTimestamp (datetime) --

        The time at which the pipeline was created, in ISO 8601 format.

      • UpdatedTimestamp (datetime) --

        The time at which the pipeline was updated, in ISO 8601 format.

      • ChimeSdkMeetingConfiguration (dict) --

        The configuration for a specified media pipeline. SourceType must be ChimeSdkMeeting .

        • SourceConfiguration (dict) --

          The source configuration for a specified media pipeline.

          • SelectedVideoStreams (dict) --

            The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

            • AttendeeIds (list) --

              The attendee IDs of the streams selected for a media pipeline.

              • (string) --

            • ExternalUserIds (list) --

              The external user IDs of the streams selected for a media pipeline.

              • (string) --

        • ArtifactsConfiguration (dict) --

          The configuration for the artifacts in an Amazon Chime SDK meeting.

          • Audio (dict) --

            The configuration for the audio artifacts.

            • MuxType (string) --

              The MUX type of the audio artifact configuration object.

          • Video (dict) --

            The configuration for the video artifacts.

            • State (string) --

              Indicates whether the video artifact is enabled or disabled.

            • MuxType (string) --

              The MUX type of the video artifact configuration object.

          • Content (dict) --

            The configuration for the content artifacts.

            • State (string) --

              Indicates whether the content artifact is enabled or disabled.

            • MuxType (string) --

              The MUX type of the artifact configuration.

          • CompositedVideo (dict) --

            Enables video compositing.

            • Layout (string) --

              The layout setting, such as GridView in the configuration object.

            • Resolution (string) --

              The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

            • GridViewConfiguration (dict) --

              The GridView configuration setting.

              • ContentShareLayout (string) --

                Defines the layout of the video tiles when content sharing is enabled.

              • PresenterOnlyConfiguration (dict) --

                Defines the configuration options for a presenter only video tile.

                • PresenterPosition (string) --

                  Defines the position of the presenter video tile. Default: TopRight .

              • ActiveSpeakerOnlyConfiguration (dict) --

                The configuration settings for an ActiveSpeakerOnly video tile.

                • ActiveSpeakerPosition (string) --

                  The position of the ActiveSpeakerOnly video tile.

              • HorizontalLayoutConfiguration (dict) --

                The configuration settings for a horizontal layout.

                • TileOrder (string) --

                  Sets the automatic ordering of the video tiles.

                • TilePosition (string) --

                  Sets the position of horizontal tiles.

                • TileCount (integer) --

                  The maximum number of video tiles to display.

                • TileAspectRatio (string) --

                  Sets the aspect ratio of the video tiles, such as 16:9.

              • VerticalLayoutConfiguration (dict) --

                The configuration settings for a vertical layout.

                • TileOrder (string) --

                  Sets the automatic ordering of the video tiles.

                • TilePosition (string) --

                  Sets the position of vertical tiles.

                • TileCount (integer) --

                  The maximum number of tiles to display.

                • TileAspectRatio (string) --

                  Sets the aspect ratio of the video tiles, such as 16:9.

              • VideoAttribute (dict) --

                The attribute settings for the video tiles.

                • CornerRadius (integer) --

                  Sets the corner radius of all video tiles.

                • BorderColor (string) --

                  Defines the border color of all video tiles.

                • HighlightColor (string) --

                  Defines the highlight color for the active video tile.

                • BorderThickness (integer) --

                  Defines the border thickness for all video tiles.

              • CanvasOrientation (string) --

                The orientation setting, Landscape or Portrait .
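
A minimal usage sketch for this operation, assuming a boto3 client for the chime-sdk-media-pipelines service; the meeting ARN, bucket ARN, and region below are hypothetical placeholders:

import uuid

import boto3

# Hypothetical placeholders: substitute your own meeting ARN and S3 bucket ARN.
MEETING_ARN = 'arn:aws:chime::111122223333:meeting/12345678-1234-1234-1234-123456789012'
BUCKET_ARN = 'arn:aws:s3:::example-capture-bucket'

client = boto3.client('chime-sdk-media-pipelines', region_name='us-east-1')

response = client.create_media_capture_pipeline(
    SourceType='ChimeSdkMeeting',
    SourceArn=MEETING_ARN,
    SinkType='S3Bucket',
    SinkArn=BUCKET_ARN,
    ClientRequestToken=str(uuid.uuid4()),  # a unique token keeps the request idempotent
    ChimeSdkMeetingConfiguration={
        'ArtifactsConfiguration': {
            'Audio': {'MuxType': 'AudioWithActiveSpeakerVideo'},
            'Video': {'State': 'Enabled', 'MuxType': 'VideoOnly'},
            'Content': {'State': 'Enabled', 'MuxType': 'ContentOnly'}
        }
    }
)

pipeline = response['MediaCapturePipeline']
print(pipeline['MediaPipelineId'], pipeline['Status'])  # Status can now be 'NotStarted'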

CreateMediaConcatenationPipeline (updated) Link ¶
Changes (response)
{'MediaConcatenationPipeline': {'Status': {'NotStarted'}}}

Creates a media concatenation pipeline.

See also: AWS API Documentation

Request Syntax

client.create_media_concatenation_pipeline(
    Sources=[
        {
            'Type': 'MediaCapturePipeline',
            'MediaCapturePipelineSourceConfiguration': {
                'MediaPipelineArn': 'string',
                'ChimeSdkMeetingConfiguration': {
                    'ArtifactsConfiguration': {
                        'Audio': {
                            'State': 'Enabled'
                        },
                        'Video': {
                            'State': 'Enabled'|'Disabled'
                        },
                        'Content': {
                            'State': 'Enabled'|'Disabled'
                        },
                        'DataChannel': {
                            'State': 'Enabled'|'Disabled'
                        },
                        'TranscriptionMessages': {
                            'State': 'Enabled'|'Disabled'
                        },
                        'MeetingEvents': {
                            'State': 'Enabled'|'Disabled'
                        },
                        'CompositedVideo': {
                            'State': 'Enabled'|'Disabled'
                        }
                    }
                }
            }
        },
    ],
    Sinks=[
        {
            'Type': 'S3Bucket',
            'S3BucketSinkConfiguration': {
                'Destination': 'string'
            }
        },
    ],
    ClientRequestToken='string',
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type Sources

list

param Sources

[REQUIRED]

An object that specifies the sources for the media concatenation pipeline.

  • (dict) --

    The source type and media pipeline configuration settings in a configuration object.

    • Type (string) -- [REQUIRED]

      The type of concatenation source in a configuration object.

    • MediaCapturePipelineSourceConfiguration (dict) -- [REQUIRED]

      The concatenation settings for the media pipeline in a configuration object.

      • MediaPipelineArn (string) -- [REQUIRED]

        The media pipeline ARN in the configuration object of a media capture pipeline.

      • ChimeSdkMeetingConfiguration (dict) -- [REQUIRED]

        The meeting configuration settings in a media capture pipeline configuration object.

        • ArtifactsConfiguration (dict) -- [REQUIRED]

          The configuration for the artifacts in an Amazon Chime SDK meeting concatenation.

          • Audio (dict) -- [REQUIRED]

            The configuration for the audio artifacts concatenation.

            • State (string) -- [REQUIRED]

              Enables or disables the configuration object.

          • Video (dict) -- [REQUIRED]

            The configuration for the video artifacts concatenation.

            • State (string) -- [REQUIRED]

              Enables or disables the configuration object.

          • Content (dict) -- [REQUIRED]

            The configuration for the content artifacts concatenation.

            • State (string) -- [REQUIRED]

              Enables or disables the configuration object.

          • DataChannel (dict) -- [REQUIRED]

            The configuration for the data channel artifacts concatenation.

            • State (string) -- [REQUIRED]

              Enables or disables the configuration object.

          • TranscriptionMessages (dict) -- [REQUIRED]

            The configuration for the transcription messages artifacts concatenation.

            • State (string) -- [REQUIRED]

              Enables or disables the configuration object.

          • MeetingEvents (dict) -- [REQUIRED]

            The configuration for the meeting events artifacts concatenation.

            • State (string) -- [REQUIRED]

              Enables or disables the configuration object.

          • CompositedVideo (dict) -- [REQUIRED]

            The configuration for the composited video artifacts concatenation.

            • State (string) -- [REQUIRED]

              Enables or disables the configuration object.

type Sinks

list

param Sinks

[REQUIRED]

An object that specifies the data sinks for the media concatenation pipeline.

  • (dict) --

    The data sink of the configuration object.

    • Type (string) -- [REQUIRED]

      The type of data sink in the configuration object.

    • S3BucketSinkConfiguration (dict) -- [REQUIRED]

      The configuration settings for an Amazon S3 bucket sink.

      • Destination (string) -- [REQUIRED]

        The destination URL of the S3 bucket.

type ClientRequestToken

string

param ClientRequestToken

The unique identifier for the client request. The token makes the API request idempotent. Use a unique token for each media concatenation pipeline request.

This field is autopopulated if not provided.

type Tags

list

param Tags

The tags associated with the media concatenation pipeline.

  • (dict) --

    A key/value pair that grants users access to meeting resources.

    • Key (string) -- [REQUIRED]

      The key half of a tag.

    • Value (string) -- [REQUIRED]

      The value half of a tag.

rtype

dict

returns

Response Syntax

{
    'MediaConcatenationPipeline': {
        'MediaPipelineId': 'string',
        'MediaPipelineArn': 'string',
        'Sources': [
            {
                'Type': 'MediaCapturePipeline',
                'MediaCapturePipelineSourceConfiguration': {
                    'MediaPipelineArn': 'string',
                    'ChimeSdkMeetingConfiguration': {
                        'ArtifactsConfiguration': {
                            'Audio': {
                                'State': 'Enabled'
                            },
                            'Video': {
                                'State': 'Enabled'|'Disabled'
                            },
                            'Content': {
                                'State': 'Enabled'|'Disabled'
                            },
                            'DataChannel': {
                                'State': 'Enabled'|'Disabled'
                            },
                            'TranscriptionMessages': {
                                'State': 'Enabled'|'Disabled'
                            },
                            'MeetingEvents': {
                                'State': 'Enabled'|'Disabled'
                            },
                            'CompositedVideo': {
                                'State': 'Enabled'|'Disabled'
                            }
                        }
                    }
                }
            },
        ],
        'Sinks': [
            {
                'Type': 'S3Bucket',
                'S3BucketSinkConfiguration': {
                    'Destination': 'string'
                }
            },
        ],
        'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted',
        'CreatedTimestamp': datetime(2015, 1, 1),
        'UpdatedTimestamp': datetime(2015, 1, 1)
    }
}

Response Structure

  • (dict) --

    • MediaConcatenationPipeline (dict) --

      A media concatenation pipeline object containing the ID, source type, MediaPipelineARN , and sink of the pipeline.

      • MediaPipelineId (string) --

        The ID of the media pipeline being concatenated.

      • MediaPipelineArn (string) --

        The ARN of the media pipeline that you specify in the SourceConfiguration object.

      • Sources (list) --

        The data sources being concatenated.

        • (dict) --

          The source type and media pipeline configuration settings in a configuration object.

          • Type (string) --

            The type of concatenation source in a configuration object.

          • MediaCapturePipelineSourceConfiguration (dict) --

            The concatenation settings for the media pipeline in a configuration object.

            • MediaPipelineArn (string) --

              The media pipeline ARN in the configuration object of a media capture pipeline.

            • ChimeSdkMeetingConfiguration (dict) --

              The meeting configuration settings in a media capture pipeline configuration object.

              • ArtifactsConfiguration (dict) --

                The configuration for the artifacts in an Amazon Chime SDK meeting concatenation.

                • Audio (dict) --

                  The configuration for the audio artifacts concatenation.

                  • State (string) --

                    Enables or disables the configuration object.

                • Video (dict) --

                  The configuration for the video artifacts concatenation.

                  • State (string) --

                    Enables or disables the configuration object.

                • Content (dict) --

                  The configuration for the content artifacts concatenation.

                  • State (string) --

                    Enables or disables the configuration object.

                • DataChannel (dict) --

                  The configuration for the data channel artifacts concatenation.

                  • State (string) --

                    Enables or disables the configuration object.

                • TranscriptionMessages (dict) --

                  The configuration for the transcription messages artifacts concatenation.

                  • State (string) --

                    Enables or disables the configuration object.

                • MeetingEvents (dict) --

                  The configuration for the meeting events artifacts concatenation.

                  • State (string) --

                    Enables or disables the configuration object.

                • CompositedVideo (dict) --

                  The configuration for the composited video artifacts concatenation.

                  • State (string) --

                    Enables or disables the configuration object.

      • Sinks (list) --

        The data sinks of the concatenation pipeline.

        • (dict) --

          The data sink of the configuration object.

          • Type (string) --

            The type of data sink in the configuration object.

          • S3BucketSinkConfiguration (dict) --

            The configuration settings for an Amazon S3 bucket sink.

            • Destination (string) --

              The destination URL of the S3 bucket.

      • Status (string) --

        The status of the concatenation pipeline.

      • CreatedTimestamp (datetime) --

        The time at which the concatenation pipeline was created.

      • UpdatedTimestamp (datetime) --

        The time at which the concatenation pipeline was last updated.
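
A minimal usage sketch, assuming a boto3 client and a previously created media capture pipeline; the capture pipeline ARN and S3 destination below are hypothetical placeholders:

import uuid

import boto3

client = boto3.client('chime-sdk-media-pipelines', region_name='us-east-1')

# Hypothetical ARN of an existing media capture pipeline to concatenate.
CAPTURE_PIPELINE_ARN = 'arn:aws:chime:us-east-1:111122223333:media-pipeline/87654321-4321-4321-4321-210987654321'

response = client.create_media_concatenation_pipeline(
    Sources=[
        {
            'Type': 'MediaCapturePipeline',
            'MediaCapturePipelineSourceConfiguration': {
                'MediaPipelineArn': CAPTURE_PIPELINE_ARN,
                'ChimeSdkMeetingConfiguration': {
                    'ArtifactsConfiguration': {
                        'Audio': {'State': 'Enabled'},
                        'Video': {'State': 'Enabled'},
                        'Content': {'State': 'Enabled'},
                        'DataChannel': {'State': 'Enabled'},
                        'TranscriptionMessages': {'State': 'Enabled'},
                        'MeetingEvents': {'State': 'Enabled'},
                        'CompositedVideo': {'State': 'Disabled'}
                    }
                }
            }
        },
    ],
    Sinks=[
        {
            'Type': 'S3Bucket',
            'S3BucketSinkConfiguration': {
                # Hypothetical destination; replace with your own bucket path.
                'Destination': 'arn:aws:s3:::example-concatenation-bucket/concatenated'
            }
        },
    ],
    ClientRequestToken=str(uuid.uuid4())
)

print(response['MediaConcatenationPipeline']['Status'])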

CreateMediaInsightsPipeline (updated) Link ¶
Changes (response)
{'MediaInsightsPipeline': {'ElementStatuses': [{'Status': 'NotStarted | NotSupported | Initializing | InProgress | Failed | Stopping | Stopped | Paused',
                                                'Type': 'AmazonTranscribeCallAnalyticsProcessor | VoiceAnalyticsProcessor | AmazonTranscribeProcessor | KinesisDataStreamSink | LambdaFunctionSink | SqsQueueSink | SnsTopicSink | S3RecordingSink | VoiceEnhancementSink'}],
                           'Status': {'NotStarted'}}}

Creates a media insights pipeline.

See also: AWS API Documentation

Request Syntax

client.create_media_insights_pipeline(
    MediaInsightsPipelineConfigurationArn='string',
    KinesisVideoStreamSourceRuntimeConfiguration={
        'Streams': [
            {
                'StreamArn': 'string',
                'FragmentNumber': 'string',
                'StreamChannelDefinition': {
                    'NumberOfChannels': 123,
                    'ChannelDefinitions': [
                        {
                            'ChannelId': 123,
                            'ParticipantRole': 'AGENT'|'CUSTOMER'
                        },
                    ]
                }
            },
        ],
        'MediaEncoding': 'pcm',
        'MediaSampleRate': 123
    },
    MediaInsightsRuntimeMetadata={
        'string': 'string'
    },
    KinesisVideoStreamRecordingSourceRuntimeConfiguration={
        'Streams': [
            {
                'StreamArn': 'string'
            },
        ],
        'FragmentSelector': {
            'FragmentSelectorType': 'ProducerTimestamp'|'ServerTimestamp',
            'TimestampRange': {
                'StartTimestamp': datetime(2015, 1, 1),
                'EndTimestamp': datetime(2015, 1, 1)
            }
        }
    },
    S3RecordingSinkRuntimeConfiguration={
        'Destination': 'string',
        'RecordingFileFormat': 'Wav'|'Opus'
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ],
    ClientRequestToken='string'
)
type MediaInsightsPipelineConfigurationArn

string

param MediaInsightsPipelineConfigurationArn

[REQUIRED]

The ARN of the pipeline's configuration.

type KinesisVideoStreamSourceRuntimeConfiguration

dict

param KinesisVideoStreamSourceRuntimeConfiguration

The runtime configuration for the Kinesis video stream source of the media insights pipeline.

  • Streams (list) -- [REQUIRED]

    The streams in the source runtime configuration of a Kinesis video stream.

    • (dict) --

      The configuration settings for a stream.

      • StreamArn (string) -- [REQUIRED]

        The ARN of the stream.

      • FragmentNumber (string) --

        The unique identifier of the fragment to begin processing.

      • StreamChannelDefinition (dict) -- [REQUIRED]

        The streaming channel definition in the stream configuration.

        • NumberOfChannels (integer) -- [REQUIRED]

          The number of channels in a streaming channel.

        • ChannelDefinitions (list) --

          The definitions of the channels in a streaming channel.

          • (dict) --

            Defines an audio channel in a Kinesis video stream.

            • ChannelId (integer) -- [REQUIRED]

              The channel ID.

            • ParticipantRole (string) --

              Specifies whether the audio in a channel belongs to the AGENT or CUSTOMER .

  • MediaEncoding (string) -- [REQUIRED]

    Specifies the encoding of your input audio. Supported format: PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

    For more information, see Media formats in the Amazon Transcribe Developer Guide .

  • MediaSampleRate (integer) -- [REQUIRED]

    The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.

    Valid Range: Minimum value of 8000. Maximum value of 48000.

type MediaInsightsRuntimeMetadata

dict

param MediaInsightsRuntimeMetadata

The runtime metadata for the media insights pipeline. Consists of a key-value map of strings.

  • (string) --

    • (string) --

type KinesisVideoStreamRecordingSourceRuntimeConfiguration

dict

param KinesisVideoStreamRecordingSourceRuntimeConfiguration

The runtime configuration for the Kinesis video recording stream source.

  • Streams (list) -- [REQUIRED]

    The stream or streams to be recorded.

    • (dict) --

      A structure that holds the settings for recording media.

      • StreamArn (string) --

        The ARN of the recording stream.

  • FragmentSelector (dict) -- [REQUIRED]

    Describes the timestamp range and timestamp origin of a range of fragments in the Kinesis video stream.

    • FragmentSelectorType (string) -- [REQUIRED]

      The origin of the timestamps to use, Server or Producer . For more information, see StartSelectorType in the Amazon Kinesis Video Streams Developer Guide .

    • TimestampRange (dict) -- [REQUIRED]

      The range of timestamps to return.

      • StartTimestamp (datetime) -- [REQUIRED]

        The starting timestamp for the specified range.

      • EndTimestamp (datetime) -- [REQUIRED]

        The ending timestamp for the specified range.

type S3RecordingSinkRuntimeConfiguration

dict

param S3RecordingSinkRuntimeConfiguration

The runtime configuration for the S3 recording sink. If specified, the settings in this structure override any settings in S3RecordingSinkConfiguration .

  • Destination (string) -- [REQUIRED]

    The URI of the S3 bucket used as the sink.

  • RecordingFileFormat (string) -- [REQUIRED]

    The file format for the media files sent to the Amazon S3 bucket.

type Tags

list

param Tags

The tags assigned to the media insights pipeline.

  • (dict) --

    A key/value pair that grants users access to meeting resources.

    • Key (string) -- [REQUIRED]

      The key half of a tag.

    • Value (string) -- [REQUIRED]

      The value half of a tag.

type ClientRequestToken

string

param ClientRequestToken

The unique identifier for the media insights pipeline request.

This field is autopopulated if not provided.

rtype

dict

returns

Response Syntax

{
    'MediaInsightsPipeline': {
        'MediaPipelineId': 'string',
        'MediaPipelineArn': 'string',
        'MediaInsightsPipelineConfigurationArn': 'string',
        'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted',
        'KinesisVideoStreamSourceRuntimeConfiguration': {
            'Streams': [
                {
                    'StreamArn': 'string',
                    'FragmentNumber': 'string',
                    'StreamChannelDefinition': {
                        'NumberOfChannels': 123,
                        'ChannelDefinitions': [
                            {
                                'ChannelId': 123,
                                'ParticipantRole': 'AGENT'|'CUSTOMER'
                            },
                        ]
                    }
                },
            ],
            'MediaEncoding': 'pcm',
            'MediaSampleRate': 123
        },
        'MediaInsightsRuntimeMetadata': {
            'string': 'string'
        },
        'KinesisVideoStreamRecordingSourceRuntimeConfiguration': {
            'Streams': [
                {
                    'StreamArn': 'string'
                },
            ],
            'FragmentSelector': {
                'FragmentSelectorType': 'ProducerTimestamp'|'ServerTimestamp',
                'TimestampRange': {
                    'StartTimestamp': datetime(2015, 1, 1),
                    'EndTimestamp': datetime(2015, 1, 1)
                }
            }
        },
        'S3RecordingSinkRuntimeConfiguration': {
            'Destination': 'string',
            'RecordingFileFormat': 'Wav'|'Opus'
        },
        'CreatedTimestamp': datetime(2015, 1, 1),
        'ElementStatuses': [
            {
                'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink',
                'Status': 'NotStarted'|'NotSupported'|'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'
            },
        ]
    }
}

Response Structure

  • (dict) --

    • MediaInsightsPipeline (dict) --

      The media insights pipeline object.

      • MediaPipelineId (string) --

        The ID of a media insights pipeline.

      • MediaPipelineArn (string) --

        The ARN of a media insights pipeline.

      • MediaInsightsPipelineConfigurationArn (string) --

        The ARN of a media insights pipeline's configuration settings.

      • Status (string) --

        The status of a media insights pipeline.

      • KinesisVideoStreamSourceRuntimeConfiguration (dict) --

        The configuration settings for a Kinesis runtime video stream in a media insights pipeline.

        • Streams (list) --

          The streams in the source runtime configuration of a Kinesis video stream.

          • (dict) --

            The configuration settings for a stream.

            • StreamArn (string) --

              The ARN of the stream.

            • FragmentNumber (string) --

              The unique identifier of the fragment to begin processing.

            • StreamChannelDefinition (dict) --

              The streaming channel definition in the stream configuration.

              • NumberOfChannels (integer) --

                The number of channels in a streaming channel.

              • ChannelDefinitions (list) --

                The definitions of the channels in a streaming channel.

                • (dict) --

                  Defines an audio channel in a Kinesis video stream.

                  • ChannelId (integer) --

                    The channel ID.

                  • ParticipantRole (string) --

                    Specifies whether the audio in a channel belongs to the AGENT or CUSTOMER .

        • MediaEncoding (string) --

          Specifies the encoding of your input audio. Supported format: PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

          For more information, see Media formats in the Amazon Transcribe Developer Guide .

        • MediaSampleRate (integer) --

          The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.

          Valid Range: Minimum value of 8000. Maximum value of 48000.

      • MediaInsightsRuntimeMetadata (dict) --

        The runtime metadata of a media insights pipeline.

        • (string) --

          • (string) --

      • KinesisVideoStreamRecordingSourceRuntimeConfiguration (dict) --

        The runtime configuration settings for a Kinesis recording video stream in a media insights pipeline.

        • Streams (list) --

          The stream or streams to be recorded.

          • (dict) --

            A structure that holds the settings for recording media.

            • StreamArn (string) --

              The ARN of the recording stream.

        • FragmentSelector (dict) --

          Describes the timestamp range and timestamp origin of a range of fragments in the Kinesis video stream.

          • FragmentSelectorType (string) --

            The origin of the timestamps to use, Server or Producer . For more information, see StartSelectorType in the Amazon Kinesis Video Streams Developer Guide .

          • TimestampRange (dict) --

            The range of timestamps to return.

            • StartTimestamp (datetime) --

              The starting timestamp for the specified range.

            • EndTimestamp (datetime) --

              The ending timestamp for the specified range.

      • S3RecordingSinkRuntimeConfiguration (dict) --

        The runtime configuration of the Amazon S3 bucket that stores recordings in a media insights pipeline.

        • Destination (string) --

          The URI of the S3 bucket used as the sink.

        • RecordingFileFormat (string) --

          The file format for the media files sent to the Amazon S3 bucket.

      • CreatedTimestamp (datetime) --

        The time at which the media insights pipeline was created.

      • ElementStatuses (list) --

        The statuses that the elements in a media insights pipeline can have during data processing.

        • (dict) --

          The status of the pipeline element.

          • Type (string) --

            The type of the pipeline element.

          • Status (string) --

            The element's status.
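
A minimal usage sketch that starts a media insights pipeline from a Kinesis video stream source, assuming a boto3 client, an existing media insights pipeline configuration, and a two-channel call; all ARNs below are hypothetical placeholders:

import uuid

import boto3

client = boto3.client('chime-sdk-media-pipelines', region_name='us-east-1')

# Hypothetical ARNs: substitute your own configuration and Kinesis video stream.
CONFIGURATION_ARN = 'arn:aws:chime:us-east-1:111122223333:media-insights-pipeline-configuration/ExampleConfiguration'
STREAM_ARN = 'arn:aws:kinesisvideo:us-east-1:111122223333:stream/example-stream/1234567890123'

response = client.create_media_insights_pipeline(
    MediaInsightsPipelineConfigurationArn=CONFIGURATION_ARN,
    KinesisVideoStreamSourceRuntimeConfiguration={
        'Streams': [
            {
                'StreamArn': STREAM_ARN,
                'StreamChannelDefinition': {
                    'NumberOfChannels': 2,
                    'ChannelDefinitions': [
                        {'ChannelId': 0, 'ParticipantRole': 'AGENT'},
                        {'ChannelId': 1, 'ParticipantRole': 'CUSTOMER'}
                    ]
                }
            },
        ],
        'MediaEncoding': 'pcm',
        'MediaSampleRate': 8000
    },
    ClientRequestToken=str(uuid.uuid4())
)

pipeline = response['MediaInsightsPipeline']
for element in pipeline.get('ElementStatuses', []):
    print(element['Type'], element['Status'])  # for example: VoiceEnhancementSink NotStarted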

CreateMediaInsightsPipelineConfiguration (updated) Link ¶
Changes (request, response)
Request
{'Elements': {'Type': {'VoiceEnhancementSink'},
              'VoiceEnhancementSinkConfiguration': {'Disabled': 'boolean'}}}
Response
{'MediaInsightsPipelineConfiguration': {'Elements': {'Type': {'VoiceEnhancementSink'},
                                                     'VoiceEnhancementSinkConfiguration': {'Disabled': 'boolean'}}}}
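
The diff above adds the VoiceEnhancementSink element type and its VoiceEnhancementSinkConfiguration. A minimal sketch of an Elements entry that enables the new sink, shown alongside an S3 recording sink; the destination below is a hypothetical placeholder, and the full request shape appears under Request Syntax:

elements = [
    {
        # New in this release: voice enhancement for call recordings.
        'Type': 'VoiceEnhancementSink',
        'VoiceEnhancementSinkConfiguration': {
            'Disabled': False  # False leaves the sink active
        }
    },
    {
        'Type': 'S3RecordingSink',
        'S3RecordingSinkConfiguration': {
            # Hypothetical destination; replace with your own bucket path.
            'Destination': 'arn:aws:s3:::example-recording-bucket/recordings',
            'RecordingFileFormat': 'Wav'
        }
    }
]

# Pass as the Elements parameter of create_media_insights_pipeline_configuration.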

Creates a media insights pipeline configuration.

See also: AWS API Documentation

Request Syntax

client.create_media_insights_pipeline_configuration(
    MediaInsightsPipelineConfigurationName='string',
    ResourceAccessRoleArn='string',
    RealTimeAlertConfiguration={
        'Disabled': True|False,
        'Rules': [
            {
                'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection',
                'KeywordMatchConfiguration': {
                    'RuleName': 'string',
                    'Keywords': [
                        'string',
                    ],
                    'Negate': True|False
                },
                'SentimentConfiguration': {
                    'RuleName': 'string',
                    'SentimentType': 'NEGATIVE',
                    'TimePeriod': 123
                },
                'IssueDetectionConfiguration': {
                    'RuleName': 'string'
                }
            },
        ]
    },
    Elements=[
        {
            'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink',
            'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                'VocabularyName': 'string',
                'VocabularyFilterName': 'string',
                'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                'LanguageModelName': 'string',
                'EnablePartialResultsStabilization': True|False,
                'PartialResultsStability': 'high'|'medium'|'low',
                'ContentIdentificationType': 'PII',
                'ContentRedactionType': 'PII',
                'PiiEntityTypes': 'string',
                'FilterPartialResults': True|False,
                'PostCallAnalyticsSettings': {
                    'OutputLocation': 'string',
                    'DataAccessRoleArn': 'string',
                    'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted',
                    'OutputEncryptionKMSKeyId': 'string'
                },
                'CallAnalyticsStreamCategories': [
                    'string',
                ]
            },
            'AmazonTranscribeProcessorConfiguration': {
                'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                'VocabularyName': 'string',
                'VocabularyFilterName': 'string',
                'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                'ShowSpeakerLabel': True|False,
                'EnablePartialResultsStabilization': True|False,
                'PartialResultsStability': 'high'|'medium'|'low',
                'ContentIdentificationType': 'PII',
                'ContentRedactionType': 'PII',
                'PiiEntityTypes': 'string',
                'LanguageModelName': 'string',
                'FilterPartialResults': True|False,
                'IdentifyLanguage': True|False,
                'LanguageOptions': 'string',
                'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                'VocabularyNames': 'string',
                'VocabularyFilterNames': 'string'
            },
            'KinesisDataStreamSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'S3RecordingSinkConfiguration': {
                'Destination': 'string',
                'RecordingFileFormat': 'Wav'|'Opus'
            },
            'VoiceAnalyticsProcessorConfiguration': {
                'SpeakerSearchStatus': 'Enabled'|'Disabled',
                'VoiceToneAnalysisStatus': 'Enabled'|'Disabled'
            },
            'LambdaFunctionSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'SqsQueueSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'SnsTopicSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'VoiceEnhancementSinkConfiguration': {
                'Disabled': True|False
            }
        },
    ],
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ],
    ClientRequestToken='string'
)
type MediaInsightsPipelineConfigurationName

string

param MediaInsightsPipelineConfigurationName

[REQUIRED]

The name of the media insights pipeline configuration.

type ResourceAccessRoleArn

string

param ResourceAccessRoleArn

[REQUIRED]

The ARN of the role used by the service to access Amazon Web Services resources, including Transcribe and Transcribe Call Analytics , on the caller’s behalf.

type RealTimeAlertConfiguration

dict

param RealTimeAlertConfiguration

The configuration settings for the real-time alerts in a media insights pipeline configuration.

  • Disabled (boolean) --

    Turns off real-time alerts.

  • Rules (list) --

    The rules in the alert. Rules specify the words or phrases that you want to be notified about.

    • (dict) --

      Specifies the words or phrases that trigger an alert.

      • Type (string) -- [REQUIRED]

        The type of alert rule.

      • KeywordMatchConfiguration (dict) --

        Specifies the settings for matching the keywords in a real-time alert rule.

        • RuleName (string) -- [REQUIRED]

          The name of the keyword match rule.

        • Keywords (list) -- [REQUIRED]

          The keywords or phrases that you want to match.

          • (string) --

        • Negate (boolean) --

          Matches keywords or phrases on their presence or absence. If set to TRUE , the rule matches when all the specified keywords or phrases are absent. Default: FALSE .

      • SentimentConfiguration (dict) --

        Specifies the settings for predicting sentiment in a real-time alert rule.

        • RuleName (string) -- [REQUIRED]

          The name of the rule in the sentiment configuration.

        • SentimentType (string) -- [REQUIRED]

          The type of sentiment, POSITIVE , NEGATIVE , or NEUTRAL .

        • TimePeriod (integer) -- [REQUIRED]

          Specifies the analysis interval.

      • IssueDetectionConfiguration (dict) --

        Specifies the issue detection settings for a real-time alert rule.

        • RuleName (string) -- [REQUIRED]

          The name of the issue detection rule.

type Elements

list

param Elements

[REQUIRED]

The elements in the request, such as a processor for Amazon Transcribe or a sink for a Kinesis Data Stream.

  • (dict) --

    An element in a media insights pipeline configuration.

    • Type (string) -- [REQUIRED]

      The element type.

    • AmazonTranscribeCallAnalyticsProcessorConfiguration (dict) --

      The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

      • LanguageCode (string) -- [REQUIRED]

        The language code in the configuration.

      • VocabularyName (string) --

        Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

        If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.

        For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide .

        Length Constraints: Minimum length of 1. Maximum length of 200.

      • VocabularyFilterName (string) --

        Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

        If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.

        For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide .

        Length Constraints: Minimum length of 1. Maximum length of 200.

      • VocabularyFilterMethod (string) --

        Specifies how to apply a vocabulary filter to a transcript.

        To replace words with *** , choose mask .

        To delete words, choose remove .

        To flag words without changing them, choose tag .

      • LanguageModelName (string) --

        Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

        The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings.

        For more information, see Custom language models in the Amazon Transcribe Developer Guide .

      • EnablePartialResultsStabilization (boolean) --

        Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

      • PartialResultsStability (string) --

        Specifies the level of stability to use when you enable partial results stabilization ( EnablePartialResultsStabilization ).

        Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

      • ContentIdentificationType (string) --

        Labels all personally identifiable information (PII) identified in your transcript.

        Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

        You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException .

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

      • ContentRedactionType (string) --

        Redacts all personally identifiable information (PII) identified in your transcript.

        Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

        You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException .

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

      • PiiEntityTypes (string) --

        Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL .

        To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType , but you can't include both.

        Values must be comma-separated and can include: ADDRESS , BANK_ACCOUNT_NUMBER , BANK_ROUTING , CREDIT_DEBIT_CVV , CREDIT_DEBIT_EXPIRY , CREDIT_DEBIT_NUMBER , EMAIL , NAME , PHONE , PIN , SSN , or ALL .

        Length Constraints: Minimum length of 1. Maximum length of 300.

      • FilterPartialResults (boolean) --

        If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

      • PostCallAnalyticsSettings (dict) --

        The settings for a post-call analysis task in an analytics configuration.

        • OutputLocation (string) -- [REQUIRED]

          The URL of the Amazon S3 bucket that contains the post-call data.

        • DataAccessRoleArn (string) -- [REQUIRED]

          The ARN of the role used by Amazon Web Services Transcribe to upload your post call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide .

        • ContentRedactionOutput (string) --

          The content redaction output settings for a post-call analysis task.

        • OutputEncryptionKMSKeyId (string) --

          The ID of the KMS (Key Management Service) key used to encrypt the output.

      • CallAnalyticsStreamCategories (list) --

        By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

        • (string) --

    • AmazonTranscribeProcessorConfiguration (dict) --

      The transcription processor configuration settings in a media insights pipeline configuration element.

      • LanguageCode (string) --

        The language code that represents the language spoken in your audio.

        If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

        For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide .

      • VocabularyName (string) --

        The name of the custom vocabulary that you specified in your Call Analytics request.

        Length Constraints: Minimum length of 1. Maximum length of 200.

      • VocabularyFilterName (string) --

        The name of the custom vocabulary filter that you specified in your Call Analytics request.

        Length Constraints: Minimum length of 1. Maximum length of 200.

      • VocabularyFilterMethod (string) --

        The vocabulary filtering method used in your Call Analytics transcription.

      • ShowSpeakerLabel (boolean) --

        Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

        For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide .

      • EnablePartialResultsStabilization (boolean) --

        Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

      • PartialResultsStability (string) --

        The level of stability to use when you enable partial results stabilization ( EnablePartialResultsStabilization ).

        Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

      • ContentIdentificationType (string) --

        Labels all personally identifiable information (PII) identified in your transcript.

        Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

        You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException .

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

      • ContentRedactionType (string) --

        Redacts all personally identifiable information (PII) identified in your transcript.

        Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

        You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException .

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

      • PiiEntityTypes (string) --

        The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL .

        To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType , but you can't include both.

        Values must be comma-separated and can include: ADDRESS , BANK_ACCOUNT_NUMBER , BANK_ROUTING , CREDIT_DEBIT_CVV , CREDIT_DEBIT_EXPIRY , CREDIT_DEBIT_NUMBER , EMAIL , NAME , PHONE , PIN , SSN , or ALL .

        If you leave this parameter empty, the default behavior is equivalent to ALL .

      • LanguageModelName (string) --

        The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

        The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

        For more information, see Custom language models in the Amazon Transcribe Developer Guide .

      • FilterPartialResults (boolean) --

        If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

      • IdentifyLanguage (boolean) --

        Turns language identification on or off.

      • LanguageOptions (string) --

        The language options for the transcription, such as automatic language detection.

      • PreferredLanguage (string) --

        The preferred language for the transcription.

      • VocabularyNames (string) --

        The names of the custom vocabulary or vocabularies used during transcription.

      • VocabularyFilterNames (string) --

        The names of the custom vocabulary filter or filters used during transcription.

    • KinesisDataStreamSinkConfiguration (dict) --

      The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

      • InsightsTarget (string) --

        The ARN of the sink.

    • S3RecordingSinkConfiguration (dict) --

      The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

      • Destination (string) --

        The default URI of the Amazon S3 bucket used as the recording sink.

      • RecordingFileFormat (string) --

        The default file format for the media files sent to the Amazon S3 bucket.

    • VoiceAnalyticsProcessorConfiguration (dict) --

      The voice analytics configuration settings in a media insights pipeline configuration element.

      • SpeakerSearchStatus (string) --

        The status of the speaker search task.

      • VoiceToneAnalysisStatus (string) --

        The status of the voice tone analysis task.

    • LambdaFunctionSinkConfiguration (dict) --

      The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

      • InsightsTarget (string) --

        The ARN of the sink.

    • SqsQueueSinkConfiguration (dict) --

      The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

      • InsightsTarget (string) --

        The ARN of the SQS sink.

    • SnsTopicSinkConfiguration (dict) --

      The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

      • InsightsTarget (string) --

        The ARN of the SNS sink.

    • VoiceEnhancementSinkConfiguration (dict) --

      The configuration settings for the VoiceEnhancementSinkConfiguration element. A brief example of an Elements entry that uses this sink appears after this parameter list.

      • Disabled (boolean) --

        Disables the VoiceEnhancementSinkConfiguration element.
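
For reference, a minimal, illustrative sketch of an Elements entry pair that keeps the new voice enhancement sink enabled alongside an S3 recording sink, using the field names described above; this list would be passed as the Elements parameter, and the S3 URI is a placeholder.

# Illustrative only: an Elements value pairing the new VoiceEnhancementSink
# with an S3RecordingSink. The bucket URI is a placeholder.
elements = [
    {
        'Type': 'S3RecordingSink',
        'S3RecordingSinkConfiguration': {
            'Destination': 's3://amzn-example-recording-bucket/calls/',
            'RecordingFileFormat': 'Wav'
        }
    },
    {
        'Type': 'VoiceEnhancementSink',
        'VoiceEnhancementSinkConfiguration': {
            'Disabled': False  # False leaves voice enhancement active for the recording
        }
    }
]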

type Tags

list

param Tags

The tags assigned to the media insights pipeline configuration.

  • (dict) --

    A key/value pair that grants users access to meeting resources.

    • Key (string) -- [REQUIRED]

      The key half of a tag.

    • Value (string) -- [REQUIRED]

      The value half of a tag.

type ClientRequestToken

string

param ClientRequestToken

The unique identifier for the media insights pipeline configuration request.

This field is autopopulated if not provided.

rtype

dict

returns

Response Syntax

{
    'MediaInsightsPipelineConfiguration': {
        'MediaInsightsPipelineConfigurationName': 'string',
        'MediaInsightsPipelineConfigurationArn': 'string',
        'ResourceAccessRoleArn': 'string',
        'RealTimeAlertConfiguration': {
            'Disabled': True|False,
            'Rules': [
                {
                    'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection',
                    'KeywordMatchConfiguration': {
                        'RuleName': 'string',
                        'Keywords': [
                            'string',
                        ],
                        'Negate': True|False
                    },
                    'SentimentConfiguration': {
                        'RuleName': 'string',
                        'SentimentType': 'NEGATIVE',
                        'TimePeriod': 123
                    },
                    'IssueDetectionConfiguration': {
                        'RuleName': 'string'
                    }
                },
            ]
        },
        'Elements': [
            {
                'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink',
                'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                    'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyName': 'string',
                    'VocabularyFilterName': 'string',
                    'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                    'LanguageModelName': 'string',
                    'EnablePartialResultsStabilization': True|False,
                    'PartialResultsStability': 'high'|'medium'|'low',
                    'ContentIdentificationType': 'PII',
                    'ContentRedactionType': 'PII',
                    'PiiEntityTypes': 'string',
                    'FilterPartialResults': True|False,
                    'PostCallAnalyticsSettings': {
                        'OutputLocation': 'string',
                        'DataAccessRoleArn': 'string',
                        'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted',
                        'OutputEncryptionKMSKeyId': 'string'
                    },
                    'CallAnalyticsStreamCategories': [
                        'string',
                    ]
                },
                'AmazonTranscribeProcessorConfiguration': {
                    'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyName': 'string',
                    'VocabularyFilterName': 'string',
                    'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                    'ShowSpeakerLabel': True|False,
                    'EnablePartialResultsStabilization': True|False,
                    'PartialResultsStability': 'high'|'medium'|'low',
                    'ContentIdentificationType': 'PII',
                    'ContentRedactionType': 'PII',
                    'PiiEntityTypes': 'string',
                    'LanguageModelName': 'string',
                    'FilterPartialResults': True|False,
                    'IdentifyLanguage': True|False,
                    'LanguageOptions': 'string',
                    'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyNames': 'string',
                    'VocabularyFilterNames': 'string'
                },
                'KinesisDataStreamSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'S3RecordingSinkConfiguration': {
                    'Destination': 'string',
                    'RecordingFileFormat': 'Wav'|'Opus'
                },
                'VoiceAnalyticsProcessorConfiguration': {
                    'SpeakerSearchStatus': 'Enabled'|'Disabled',
                    'VoiceToneAnalysisStatus': 'Enabled'|'Disabled'
                },
                'LambdaFunctionSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'SqsQueueSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'SnsTopicSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'VoiceEnhancementSinkConfiguration': {
                    'Disabled': True|False
                }
            },
        ],
        'MediaInsightsPipelineConfigurationId': 'string',
        'CreatedTimestamp': datetime(2015, 1, 1),
        'UpdatedTimestamp': datetime(2015, 1, 1)
    }
}

Response Structure

  • (dict) --

    • MediaInsightsPipelineConfiguration (dict) --

      The configuration settings for the media insights pipeline.

      • MediaInsightsPipelineConfigurationName (string) --

        The name of the configuration.

      • MediaInsightsPipelineConfigurationArn (string) --

        The ARN of the configuration.

      • ResourceAccessRoleArn (string) --

        The ARN of the role used by the service to access Amazon Web Services resources.

      • RealTimeAlertConfiguration (dict) --

        Lists the rules that trigger a real-time alert.

        • Disabled (boolean) --

          Turns off real-time alerts.

        • Rules (list) --

          The rules in the alert. Rules specify the words or phrases that you want to be notified about.

          • (dict) --

            Specifies the words or phrases that trigger an alert.

            • Type (string) --

              The type of alert rule.

            • KeywordMatchConfiguration (dict) --

              Specifies the settings for matching the keywords in a real-time alert rule.

              • RuleName (string) --

                The name of the keyword match rule.

              • Keywords (list) --

                The keywords or phrases that you want to match.

                • (string) --

              • Negate (boolean) --

                Matches keywords or phrases on their presence or absence. If set to TRUE , the rule matches when all the specified keywords or phrases are absent. Default: FALSE .

            • SentimentConfiguration (dict) --

              Specifies the settings for predicting sentiment in a real-time alert rule.

              • RuleName (string) --

                The name of the rule in the sentiment configuration.

              • SentimentType (string) --

                The type of sentiment, POSITIVE , NEGATIVE , or NEUTRAL .

              • TimePeriod (integer) --

                Specifies the analysis interval.

            • IssueDetectionConfiguration (dict) --

              Specifies the issue detection settings for a real-time alert rule.

              • RuleName (string) --

                The name of the issue detection rule.

      • Elements (list) --

        The elements in the configuration.

        • (dict) --

          An element in a media insights pipeline configuration.

          • Type (string) --

            The element type.

          • AmazonTranscribeCallAnalyticsProcessorConfiguration (dict) --

            The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

            • LanguageCode (string) --

              The language code in the configuration.

            • VocabularyName (string) --

              Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

              If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.

              For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide .

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName (string) --

              Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

              If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.

              For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide .

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod (string) --

              Specifies how to apply a vocabulary filter to a transcript.

              To replace words with *** , choose mask .

              To delete words, choose remove .

              To flag words without changing them, choose tag .

            • LanguageModelName (string) --

              Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide .

            • EnablePartialResultsStabilization (boolean) --

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • PartialResultsStability (string) --

              Specifies the level of stability to use when you enable partial results stabilization ( EnablePartialResultsStabilization ).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • ContentIdentificationType (string) --

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • ContentRedactionType (string) --

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • PiiEntityTypes (string) --

              Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL .

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType , but you can't include both.

              Values must be comma-separated and can include: ADDRESS , BANK_ACCOUNT_NUMBER , BANK_ROUTING , CREDIT_DEBIT_CVV , CREDIT_DEBIT_EXPIRY , CREDIT_DEBIT_NUMBER , EMAIL , NAME , PHONE , PIN , SSN , or ALL .

              Length Constraints: Minimum length of 1. Maximum length of 300.

            • FilterPartialResults (boolean) --

              If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

            • PostCallAnalyticsSettings (dict) --

              The settings for a post-call analysis task in an analytics configuration.

              • OutputLocation (string) --

                The URL of the Amazon S3 bucket that contains the post-call data.

              • DataAccessRoleArn (string) --

                The ARN of the role used by Amazon Web Services Transcribe to upload your post call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide .

              • ContentRedactionOutput (string) --

                The content redaction output settings for a post-call analysis task.

              • OutputEncryptionKMSKeyId (string) --

                The ID of the KMS (Key Management Service) key used to encrypt the output.

            • CallAnalyticsStreamCategories (list) --

              By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

              • (string) --

          • AmazonTranscribeProcessorConfiguration (dict) --

            The transcription processor configuration settings in a media insights pipeline configuration element.

            • LanguageCode (string) --

              The language code that represents the language spoken in your audio.

              If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

              For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide .

            • VocabularyName (string) --

              The name of the custom vocabulary that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName (string) --

              The name of the custom vocabulary filter that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod (string) --

              The vocabulary filtering method used in your Call Analytics transcription.

            • ShowSpeakerLabel (boolean) --

              Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

              For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide .

            • EnablePartialResultsStabilization (boolean) --

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • PartialResultsStability (string) --

              The level of stability to use when you enable partial results stabilization ( EnablePartialResultsStabilization ).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • ContentIdentificationType (string) --

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • ContentRedactionType (string) --

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • PiiEntityTypes (string) --

              The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL .

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType , but you can't include both.

              Values must be comma-separated and can include: ADDRESS , BANK_ACCOUNT_NUMBER , BANK_ROUTING , CREDIT_DEBIT_CVV , CREDIT_DEBIT_EXPIRY , CREDIT_DEBIT_NUMBER , EMAIL , NAME , PHONE , PIN , SSN , or ALL .

              If you leave this parameter empty, the default behavior is equivalent to ALL .

            • LanguageModelName (string) --

              The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide .

            • FilterPartialResults (boolean) --

              If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

            • IdentifyLanguage (boolean) --

              Turns language identification on or off.

            • LanguageOptions (string) --

              The language options for the transcription, such as automatic language detection.

            • PreferredLanguage (string) --

              The preferred language for the transcription.

            • VocabularyNames (string) --

              The names of the custom vocabulary or vocabularies used during transcription.

            • VocabularyFilterNames (string) --

              The names of the custom vocabulary filter or filters used during transcription.

          • KinesisDataStreamSinkConfiguration (dict) --

            The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the sink.

          • S3RecordingSinkConfiguration (dict) --

            The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

            • Destination (string) --

              The default URI of the Amazon S3 bucket used as the recording sink.

            • RecordingFileFormat (string) --

              The default file format for the media files sent to the Amazon S3 bucket.

          • VoiceAnalyticsProcessorConfiguration (dict) --

            The voice analytics configuration settings in a media insights pipeline configuration element.

            • SpeakerSearchStatus (string) --

              The status of the speaker search task.

            • VoiceToneAnalysisStatus (string) --

              The status of the voice tone analysis task.

          • LambdaFunctionSinkConfiguration (dict) --

            The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the sink.

          • SqsQueueSinkConfiguration (dict) --

            The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the SQS sink.

          • SnsTopicSinkConfiguration (dict) --

            The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the SNS sink.

          • VoiceEnhancementSinkConfiguration (dict) --

            The configuration settings for the VoiceEnhancementSinkConfiguration element.

            • Disabled (boolean) --

              Disables the VoiceEnhancementSinkConfiguration element.

      • MediaInsightsPipelineConfigurationId (string) --

        The ID of the configuration.

      • CreatedTimestamp (datetime) --

        The time at which the configuration was created.

      • UpdatedTimestamp (datetime) --

        The time at which the configuration was last updated.
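
For reference, a minimal sketch of creating a configuration with the schema above, assuming this section documents the create_media_insights_pipeline_configuration operation; the configuration name, IAM role ARN, and Kinesis stream ARN are placeholders, and error handling is omitted.

import boto3

# Minimal sketch: a Transcribe Call Analytics processor feeding a
# Kinesis Data Stream sink. All names and ARNs below are placeholders.
client = boto3.client('chime-sdk-media-pipelines')

response = client.create_media_insights_pipeline_configuration(
    MediaInsightsPipelineConfigurationName='ExampleCallAnalyticsConfig',
    ResourceAccessRoleArn='arn:aws:iam::111122223333:role/ExampleResourceAccessRole',
    Elements=[
        {
            'Type': 'AmazonTranscribeCallAnalyticsProcessor',
            'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                'LanguageCode': 'en-US',
                'FilterPartialResults': True
            }
        },
        {
            'Type': 'KinesisDataStreamSink',
            'KinesisDataStreamSinkConfiguration': {
                'InsightsTarget': 'arn:aws:kinesis:us-east-1:111122223333:stream/ExampleInsightsStream'
            }
        }
    ]
)

print(response['MediaInsightsPipelineConfiguration']['MediaInsightsPipelineConfigurationArn'])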

CreateMediaLiveConnectorPipeline (updated) Link ¶
Changes (response)
{'MediaLiveConnectorPipeline': {'Status': {'NotStarted'}}}

Creates a media live connector pipeline in an Amazon Chime SDK meeting.

See also: AWS API Documentation

Request Syntax

client.create_media_live_connector_pipeline(
    Sources=[
        {
            'SourceType': 'ChimeSdkMeeting',
            'ChimeSdkMeetingLiveConnectorConfiguration': {
                'Arn': 'string',
                'MuxType': 'AudioWithCompositedVideo'|'AudioWithActiveSpeakerVideo',
                'CompositedVideo': {
                    'Layout': 'GridView',
                    'Resolution': 'HD'|'FHD',
                    'GridViewConfiguration': {
                        'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly',
                        'PresenterOnlyConfiguration': {
                            'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                        },
                        'ActiveSpeakerOnlyConfiguration': {
                            'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                        },
                        'HorizontalLayoutConfiguration': {
                            'TileOrder': 'JoinSequence'|'SpeakerSequence',
                            'TilePosition': 'Top'|'Bottom',
                            'TileCount': 123,
                            'TileAspectRatio': 'string'
                        },
                        'VerticalLayoutConfiguration': {
                            'TileOrder': 'JoinSequence'|'SpeakerSequence',
                            'TilePosition': 'Left'|'Right',
                            'TileCount': 123,
                            'TileAspectRatio': 'string'
                        },
                        'VideoAttribute': {
                            'CornerRadius': 123,
                            'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                            'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                            'BorderThickness': 123
                        },
                        'CanvasOrientation': 'Landscape'|'Portrait'
                    }
                },
                'SourceConfiguration': {
                    'SelectedVideoStreams': {
                        'AttendeeIds': [
                            'string',
                        ],
                        'ExternalUserIds': [
                            'string',
                        ]
                    }
                }
            }
        },
    ],
    Sinks=[
        {
            'SinkType': 'RTMP',
            'RTMPConfiguration': {
                'Url': 'string',
                'AudioChannels': 'Stereo'|'Mono',
                'AudioSampleRate': 'string'
            }
        },
    ],
    ClientRequestToken='string',
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type Sources

list

param Sources

[REQUIRED]

The media live connector pipeline's data sources.

  • (dict) --

    The data source configuration object of a streaming media pipeline.

    • SourceType (string) -- [REQUIRED]

      The source configuration's media source type.

    • ChimeSdkMeetingLiveConnectorConfiguration (dict) -- [REQUIRED]

      The configuration settings of the connector pipeline.

      • Arn (string) -- [REQUIRED]

        The configuration object's Chime SDK meeting ARN.

      • MuxType (string) -- [REQUIRED]

        The configuration object's multiplex type.

      • CompositedVideo (dict) --

        The media pipeline's composited video.

        • Layout (string) --

          The layout setting, such as GridView in the configuration object.

        • Resolution (string) --

          The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

        • GridViewConfiguration (dict) -- [REQUIRED]

          The GridView configuration setting.

          • ContentShareLayout (string) -- [REQUIRED]

            Defines the layout of the video tiles when content sharing is enabled.

          • PresenterOnlyConfiguration (dict) --

            Defines the configuration options for a presenter only video tile.

            • PresenterPosition (string) --

              Defines the position of the presenter video tile. Default: TopRight .

          • ActiveSpeakerOnlyConfiguration (dict) --

            The configuration settings for an ActiveSpeakerOnly video tile.

            • ActiveSpeakerPosition (string) --

              The position of the ActiveSpeakerOnly video tile.

          • HorizontalLayoutConfiguration (dict) --

            The configuration settings for a horizontal layout.

            • TileOrder (string) --

              Sets the automatic ordering of the video tiles.

            • TilePosition (string) --

              Sets the position of horizontal tiles.

            • TileCount (integer) --

              The maximum number of video tiles to display.

            • TileAspectRatio (string) --

              Sets the aspect ratio of the video tiles, such as 16:9.

          • VerticalLayoutConfiguration (dict) --

            The configuration settings for a vertical layout.

            • TileOrder (string) --

              Sets the automatic ordering of the video tiles.

            • TilePosition (string) --

              Sets the position of vertical tiles.

            • TileCount (integer) --

              The maximum number of tiles to display.

            • TileAspectRatio (string) --

              Sets the aspect ratio of the video tiles, such as 16:9.

          • VideoAttribute (dict) --

            The attribute settings for the video tiles.

            • CornerRadius (integer) --

              Sets the corner radius of all video tiles.

            • BorderColor (string) --

              Defines the border color of all video tiles.

            • HighlightColor (string) --

              Defines the highlight color for the active video tile.

            • BorderThickness (integer) --

              Defines the border thickness for all video tiles.

          • CanvasOrientation (string) --

            The orientation setting, horizontal or vertical.

      • SourceConfiguration (dict) --

        The source configuration settings of the media pipeline's configuration object.

        • SelectedVideoStreams (dict) --

          The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

          • AttendeeIds (list) --

            The attendee IDs of the streams selected for a media pipeline.

            • (string) --

          • ExternalUserIds (list) --

            The external user IDs of the streams selected for a media pipeline.

            • (string) --

type Sinks

list

param Sinks

[REQUIRED]

The media live connector pipeline's data sinks.

  • (dict) --

    The media pipeline's sink configuration settings.

    • SinkType (string) -- [REQUIRED]

      The sink configuration's sink type.

    • RTMPConfiguration (dict) -- [REQUIRED]

      The sink configuration's RTMP configuration settings.

      • Url (string) -- [REQUIRED]

        The URL of the RTMP configuration.

      • AudioChannels (string) --

        The audio channels set for the RTMP configuration.

      • AudioSampleRate (string) --

        The audio sample rate set for the RTMP configuration. Default: 48000.

type ClientRequestToken

string

param ClientRequestToken

The token assigned to the client making the request.

This field is autopopulated if not provided.

type Tags

list

param Tags

The tags associated with the media live connector pipeline.

  • (dict) --

    A key/value pair that grants users access to meeting resources.

    • Key (string) -- [REQUIRED]

      The key half of a tag.

    • Value (string) -- [REQUIRED]

      The value half of a tag.

rtype

dict

returns

Response Syntax

{
    'MediaLiveConnectorPipeline': {
        'Sources': [
            {
                'SourceType': 'ChimeSdkMeeting',
                'ChimeSdkMeetingLiveConnectorConfiguration': {
                    'Arn': 'string',
                    'MuxType': 'AudioWithCompositedVideo'|'AudioWithActiveSpeakerVideo',
                    'CompositedVideo': {
                        'Layout': 'GridView',
                        'Resolution': 'HD'|'FHD',
                        'GridViewConfiguration': {
                            'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly',
                            'PresenterOnlyConfiguration': {
                                'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                            },
                            'ActiveSpeakerOnlyConfiguration': {
                                'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                            },
                            'HorizontalLayoutConfiguration': {
                                'TileOrder': 'JoinSequence'|'SpeakerSequence',
                                'TilePosition': 'Top'|'Bottom',
                                'TileCount': 123,
                                'TileAspectRatio': 'string'
                            },
                            'VerticalLayoutConfiguration': {
                                'TileOrder': 'JoinSequence'|'SpeakerSequence',
                                'TilePosition': 'Left'|'Right',
                                'TileCount': 123,
                                'TileAspectRatio': 'string'
                            },
                            'VideoAttribute': {
                                'CornerRadius': 123,
                                'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                                'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                                'BorderThickness': 123
                            },
                            'CanvasOrientation': 'Landscape'|'Portrait'
                        }
                    },
                    'SourceConfiguration': {
                        'SelectedVideoStreams': {
                            'AttendeeIds': [
                                'string',
                            ],
                            'ExternalUserIds': [
                                'string',
                            ]
                        }
                    }
                }
            },
        ],
        'Sinks': [
            {
                'SinkType': 'RTMP',
                'RTMPConfiguration': {
                    'Url': 'string',
                    'AudioChannels': 'Stereo'|'Mono',
                    'AudioSampleRate': 'string'
                }
            },
        ],
        'MediaPipelineId': 'string',
        'MediaPipelineArn': 'string',
        'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted',
        'CreatedTimestamp': datetime(2015, 1, 1),
        'UpdatedTimestamp': datetime(2015, 1, 1)
    }
}

Response Structure

  • (dict) --

    • MediaLiveConnectorPipeline (dict) --

      The new media live connector pipeline.

      • Sources (list) --

        The connector pipeline's data sources.

        • (dict) --

          The data source configuration object of a streaming media pipeline.

          • SourceType (string) --

            The source configuration's media source type.

          • ChimeSdkMeetingLiveConnectorConfiguration (dict) --

            The configuration settings of the connector pipeline.

            • Arn (string) --

              The configuration object's Chime SDK meeting ARN.

            • MuxType (string) --

              The configuration object's multiplex type.

            • CompositedVideo (dict) --

              The media pipeline's composited video.

              • Layout (string) --

                The layout setting, such as GridView in the configuration object.

              • Resolution (string) --

                The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

              • GridViewConfiguration (dict) --

                The GridView configuration setting.

                • ContentShareLayout (string) --

                  Defines the layout of the video tiles when content sharing is enabled.

                • PresenterOnlyConfiguration (dict) --

                  Defines the configuration options for a presenter only video tile.

                  • PresenterPosition (string) --

                    Defines the position of the presenter video tile. Default: TopRight .

                • ActiveSpeakerOnlyConfiguration (dict) --

                  The configuration settings for an ActiveSpeakerOnly video tile.

                  • ActiveSpeakerPosition (string) --

                    The position of the ActiveSpeakerOnly video tile.

                • HorizontalLayoutConfiguration (dict) --

                  The configuration settings for a horizontal layout.

                  • TileOrder (string) --

                    Sets the automatic ordering of the video tiles.

                  • TilePosition (string) --

                    Sets the position of horizontal tiles.

                  • TileCount (integer) --

                    The maximum number of video tiles to display.

                  • TileAspectRatio (string) --

                    Sets the aspect ratio of the video tiles, such as 16:9.

                • VerticalLayoutConfiguration (dict) --

                  The configuration settings for a vertical layout.

                  • TileOrder (string) --

                    Sets the automatic ordering of the video tiles.

                  • TilePosition (string) --

                    Sets the position of vertical tiles.

                  • TileCount (integer) --

                    The maximum number of tiles to display.

                  • TileAspectRatio (string) --

                    Sets the aspect ratio of the video tiles, such as 16:9.

                • VideoAttribute (dict) --

                  The attribute settings for the video tiles.

                  • CornerRadius (integer) --

                    Sets the corner radius of all video tiles.

                  • BorderColor (string) --

                    Defines the border color of all video tiles.

                  • HighlightColor (string) --

                    Defines the highlight color for the active video tile.

                  • BorderThickness (integer) --

                    Defines the border thickness for all video tiles.

                • CanvasOrientation (string) --

                  The orientation setting, horizontal or vertical.

            • SourceConfiguration (dict) --

              The source configuration settings of the media pipeline's configuration object.

              • SelectedVideoStreams (dict) --

                The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

                • AttendeeIds (list) --

                  The attendee IDs of the streams selected for a media pipeline.

                  • (string) --

                • ExternalUserIds (list) --

                  The external user IDs of the streams selected for a media pipeline.

                  • (string) --

      • Sinks (list) --

        The connector pipeline's data sinks.

        • (dict) --

          The media pipeline's sink configuration settings.

          • SinkType (string) --

            The sink configuration's sink type.

          • RTMPConfiguration (dict) --

            The sink configuration's RTMP configuration settings.

            • Url (string) --

              The URL of the RTMP configuration.

            • AudioChannels (string) --

              The audio channels set for the RTMP configuration.

            • AudioSampleRate (string) --

              The audio sample rate set for the RTMP configuration. Default: 48000.

      • MediaPipelineId (string) --

        The connector pipeline's ID.

      • MediaPipelineArn (string) --

        The connector pipeline's ARN.

      • Status (string) --

        The connector pipeline's status.

      • CreatedTimestamp (datetime) --

        The time at which the connector pipeline was created.

      • UpdatedTimestamp (datetime) --

        The time at which the connector pipeline was last updated.
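
For reference, a minimal sketch of the create_media_live_connector_pipeline call documented above, streaming a meeting's active-speaker feed to an RTMP endpoint; the meeting ARN and RTMP URL are placeholders.

import boto3

# Minimal sketch of create_media_live_connector_pipeline as documented above.
# The meeting ARN and RTMP URL are placeholders.
client = boto3.client('chime-sdk-media-pipelines')

response = client.create_media_live_connector_pipeline(
    Sources=[
        {
            'SourceType': 'ChimeSdkMeeting',
            'ChimeSdkMeetingLiveConnectorConfiguration': {
                'Arn': 'arn:aws:chime::111122223333:meeting/example-meeting-id',
                'MuxType': 'AudioWithActiveSpeakerVideo'
            }
        }
    ],
    Sinks=[
        {
            'SinkType': 'RTMP',
            'RTMPConfiguration': {
                'Url': 'rtmps://example-endpoint.example.com/live/stream-key',
                'AudioChannels': 'Stereo',
                'AudioSampleRate': '48000'
            }
        }
    ]
)

# Status can now be 'NotStarted' before the pipeline begins streaming.
print(response['MediaLiveConnectorPipeline']['Status'])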

GetMediaCapturePipeline (updated) Link ¶
Changes (response)
{'MediaCapturePipeline': {'Status': {'NotStarted'}}}

Gets an existing media pipeline.

See also: AWS API Documentation

Request Syntax

client.get_media_capture_pipeline(
    MediaPipelineId='string'
)
type MediaPipelineId

string

param MediaPipelineId

[REQUIRED]

The ID of the pipeline that you want to get.

rtype

dict

returns

Response Syntax

{
    'MediaCapturePipeline': {
        'MediaPipelineId': 'string',
        'MediaPipelineArn': 'string',
        'SourceType': 'ChimeSdkMeeting',
        'SourceArn': 'string',
        'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted',
        'SinkType': 'S3Bucket',
        'SinkArn': 'string',
        'CreatedTimestamp': datetime(2015, 1, 1),
        'UpdatedTimestamp': datetime(2015, 1, 1),
        'ChimeSdkMeetingConfiguration': {
            'SourceConfiguration': {
                'SelectedVideoStreams': {
                    'AttendeeIds': [
                        'string',
                    ],
                    'ExternalUserIds': [
                        'string',
                    ]
                }
            },
            'ArtifactsConfiguration': {
                'Audio': {
                    'MuxType': 'AudioOnly'|'AudioWithActiveSpeakerVideo'|'AudioWithCompositedVideo'
                },
                'Video': {
                    'State': 'Enabled'|'Disabled',
                    'MuxType': 'VideoOnly'
                },
                'Content': {
                    'State': 'Enabled'|'Disabled',
                    'MuxType': 'ContentOnly'
                },
                'CompositedVideo': {
                    'Layout': 'GridView',
                    'Resolution': 'HD'|'FHD',
                    'GridViewConfiguration': {
                        'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly',
                        'PresenterOnlyConfiguration': {
                            'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                        },
                        'ActiveSpeakerOnlyConfiguration': {
                            'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                        },
                        'HorizontalLayoutConfiguration': {
                            'TileOrder': 'JoinSequence'|'SpeakerSequence',
                            'TilePosition': 'Top'|'Bottom',
                            'TileCount': 123,
                            'TileAspectRatio': 'string'
                        },
                        'VerticalLayoutConfiguration': {
                            'TileOrder': 'JoinSequence'|'SpeakerSequence',
                            'TilePosition': 'Left'|'Right',
                            'TileCount': 123,
                            'TileAspectRatio': 'string'
                        },
                        'VideoAttribute': {
                            'CornerRadius': 123,
                            'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                            'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                            'BorderThickness': 123
                        },
                        'CanvasOrientation': 'Landscape'|'Portrait'
                    }
                }
            }
        }
    }
}

Response Structure

  • (dict) --

    • MediaCapturePipeline (dict) --

      The media pipeline object.

      • MediaPipelineId (string) --

        The ID of a media pipeline.

      • MediaPipelineArn (string) --

        The ARN of the media capture pipeline.

      • SourceType (string) --

        Source type from which media artifacts are saved. You must use ChimeSdkMeeting .

      • SourceArn (string) --

        ARN of the source from which the media artifacts are saved.

      • Status (string) --

        The status of the media pipeline.

      • SinkType (string) --

        Destination type to which the media artifacts are saved. You must use an S3 bucket.

      • SinkArn (string) --

        ARN of the destination to which the media artifacts are saved.

      • CreatedTimestamp (datetime) --

        The time at which the pipeline was created, in ISO 8601 format.

      • UpdatedTimestamp (datetime) --

        The time at which the pipeline was updated, in ISO 8601 format.

      • ChimeSdkMeetingConfiguration (dict) --

        The configuration for a specified media pipeline. SourceType must be ChimeSdkMeeting .

        • SourceConfiguration (dict) --

          The source configuration for a specified media pipeline.

          • SelectedVideoStreams (dict) --

            The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

            • AttendeeIds (list) --

              The attendee IDs of the streams selected for a media pipeline.

              • (string) --

            • ExternalUserIds (list) --

              The external user IDs of the streams selected for a media pipeline.

              • (string) --

        • ArtifactsConfiguration (dict) --

          The configuration for the artifacts in an Amazon Chime SDK meeting.

          • Audio (dict) --

            The configuration for the audio artifacts.

            • MuxType (string) --

              The MUX type of the audio artifact configuration object.

          • Video (dict) --

            The configuration for the video artifacts.

            • State (string) --

              Indicates whether the video artifact is enabled or disabled.

            • MuxType (string) --

              The MUX type of the video artifact configuration object.

          • Content (dict) --

            The configuration for the content artifacts.

            • State (string) --

              Indicates whether the content artifact is enabled or disabled.

            • MuxType (string) --

              The MUX type of the artifact configuration.

          • CompositedVideo (dict) --

            Enables video compositing.

            • Layout (string) --

              The layout setting, such as GridView in the configuration object.

            • Resolution (string) --

              The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

            • GridViewConfiguration (dict) --

              The GridView configuration setting.

              • ContentShareLayout (string) --

                Defines the layout of the video tiles when content sharing is enabled.

              • PresenterOnlyConfiguration (dict) --

                Defines the configuration options for a presenter only video tile.

                • PresenterPosition (string) --

                  Defines the position of the presenter video tile. Default: TopRight .

              • ActiveSpeakerOnlyConfiguration (dict) --

                The configuration settings for an ActiveSpeakerOnly video tile.

                • ActiveSpeakerPosition (string) --

                  The position of the ActiveSpeakerOnly video tile.

              • HorizontalLayoutConfiguration (dict) --

                The configuration settings for a horizontal layout.

                • TileOrder (string) --

                  Sets the automatic ordering of the video tiles.

                • TilePosition (string) --

                  Sets the position of horizontal tiles.

                • TileCount (integer) --

                  The maximum number of video tiles to display.

                • TileAspectRatio (string) --

                  Sets the aspect ratio of the video tiles, such as 16:9.

              • VerticalLayoutConfiguration (dict) --

                The configuration settings for a vertical layout.

                • TileOrder (string) --

                  Sets the automatic ordering of the video tiles.

                • TilePosition (string) --

                  Sets the position of vertical tiles.

                • TileCount (integer) --

                  The maximum number of tiles to display.

                • TileAspectRatio (string) --

                  Sets the aspect ratio of the video tiles, such as 16:9.

              • VideoAttribute (dict) --

                The attribute settings for the video tiles.

                • CornerRadius (integer) --

                  Sets the corner radius of all video tiles.

                • BorderColor (string) --

                  Defines the border color of all video tiles.

                • HighlightColor (string) --

                  Defines the highlight color for the active video tile.

                • BorderThickness (integer) --

                  Defines the border thickness for all video tiles.

              • CanvasOrientation (string) --

                The orientation setting, horizontal or vertical.
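
For reference, a minimal sketch of the get_media_capture_pipeline call documented above, reading the pipeline's status, which can now include NotStarted ; the pipeline ID is a placeholder.

import boto3

# Minimal sketch of get_media_capture_pipeline as documented above.
# The pipeline ID is a placeholder.
client = boto3.client('chime-sdk-media-pipelines')

pipeline = client.get_media_capture_pipeline(
    MediaPipelineId='00000000-0000-0000-0000-000000000000'
)['MediaCapturePipeline']

print(pipeline['MediaPipelineArn'], pipeline['Status'])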

GetMediaInsightsPipelineConfiguration (updated) Link ¶
Changes (response)
{'MediaInsightsPipelineConfiguration': {'Elements': {'Type': {'VoiceEnhancementSink'},
                                                     'VoiceEnhancementSinkConfiguration': {'Disabled': 'boolean'}}}}

Gets the configuration settings for a media insights pipeline.

See also: AWS API Documentation

Request Syntax

client.get_media_insights_pipeline_configuration(
    Identifier='string'
)
type Identifier

string

param Identifier

[REQUIRED]

The unique identifier of the requested resource. Valid values include the name and ARN of the media insights pipeline configuration.
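
For reference, a minimal sketch of the get_media_insights_pipeline_configuration call documented above, checking whether the returned Elements list includes the new VoiceEnhancementSink element; the configuration name is a placeholder.

import boto3

# Minimal sketch of get_media_insights_pipeline_configuration as documented above.
# The configuration name is a placeholder; an ARN also works as the Identifier.
client = boto3.client('chime-sdk-media-pipelines')

config = client.get_media_insights_pipeline_configuration(
    Identifier='ExampleCallAnalyticsConfig'
)['MediaInsightsPipelineConfiguration']

for element in config['Elements']:
    if element['Type'] == 'VoiceEnhancementSink':
        # The Disabled flag is inverted: False means voice enhancement stays on.
        print('Voice enhancement enabled:',
              not element['VoiceEnhancementSinkConfiguration']['Disabled'])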

rtype

dict

returns

Response Syntax

{
    'MediaInsightsPipelineConfiguration': {
        'MediaInsightsPipelineConfigurationName': 'string',
        'MediaInsightsPipelineConfigurationArn': 'string',
        'ResourceAccessRoleArn': 'string',
        'RealTimeAlertConfiguration': {
            'Disabled': True|False,
            'Rules': [
                {
                    'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection',
                    'KeywordMatchConfiguration': {
                        'RuleName': 'string',
                        'Keywords': [
                            'string',
                        ],
                        'Negate': True|False
                    },
                    'SentimentConfiguration': {
                        'RuleName': 'string',
                        'SentimentType': 'NEGATIVE',
                        'TimePeriod': 123
                    },
                    'IssueDetectionConfiguration': {
                        'RuleName': 'string'
                    }
                },
            ]
        },
        'Elements': [
            {
                'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink',
                'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                    'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyName': 'string',
                    'VocabularyFilterName': 'string',
                    'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                    'LanguageModelName': 'string',
                    'EnablePartialResultsStabilization': True|False,
                    'PartialResultsStability': 'high'|'medium'|'low',
                    'ContentIdentificationType': 'PII',
                    'ContentRedactionType': 'PII',
                    'PiiEntityTypes': 'string',
                    'FilterPartialResults': True|False,
                    'PostCallAnalyticsSettings': {
                        'OutputLocation': 'string',
                        'DataAccessRoleArn': 'string',
                        'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted',
                        'OutputEncryptionKMSKeyId': 'string'
                    },
                    'CallAnalyticsStreamCategories': [
                        'string',
                    ]
                },
                'AmazonTranscribeProcessorConfiguration': {
                    'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyName': 'string',
                    'VocabularyFilterName': 'string',
                    'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                    'ShowSpeakerLabel': True|False,
                    'EnablePartialResultsStabilization': True|False,
                    'PartialResultsStability': 'high'|'medium'|'low',
                    'ContentIdentificationType': 'PII',
                    'ContentRedactionType': 'PII',
                    'PiiEntityTypes': 'string',
                    'LanguageModelName': 'string',
                    'FilterPartialResults': True|False,
                    'IdentifyLanguage': True|False,
                    'LanguageOptions': 'string',
                    'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyNames': 'string',
                    'VocabularyFilterNames': 'string'
                },
                'KinesisDataStreamSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'S3RecordingSinkConfiguration': {
                    'Destination': 'string',
                    'RecordingFileFormat': 'Wav'|'Opus'
                },
                'VoiceAnalyticsProcessorConfiguration': {
                    'SpeakerSearchStatus': 'Enabled'|'Disabled',
                    'VoiceToneAnalysisStatus': 'Enabled'|'Disabled'
                },
                'LambdaFunctionSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'SqsQueueSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'SnsTopicSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'VoiceEnhancementSinkConfiguration': {
                    'Disabled': True|False
                }
            },
        ],
        'MediaInsightsPipelineConfigurationId': 'string',
        'CreatedTimestamp': datetime(2015, 1, 1),
        'UpdatedTimestamp': datetime(2015, 1, 1)
    }
}

Response Structure

  • (dict) --

    • MediaInsightsPipelineConfiguration (dict) --

      The requested media insights pipeline configuration.

      • MediaInsightsPipelineConfigurationName (string) --

        The name of the configuration.

      • MediaInsightsPipelineConfigurationArn (string) --

        The ARN of the configuration.

      • ResourceAccessRoleArn (string) --

        The ARN of the role used by the service to access Amazon Web Services resources.

      • RealTimeAlertConfiguration (dict) --

        Lists the rules that trigger a real-time alert.

        • Disabled (boolean) --

          Turns off real-time alerts.

        • Rules (list) --

          The rules in the alert. Rules specify the words or phrases that you want to be notified about.

          • (dict) --

            Specifies the words or phrases that trigger an alert.

            • Type (string) --

              The type of alert rule.

            • KeywordMatchConfiguration (dict) --

              Specifies the settings for matching the keywords in a real-time alert rule.

              • RuleName (string) --

                The name of the keyword match rule.

              • Keywords (list) --

                The keywords or phrases that you want to match.

                • (string) --

              • Negate (boolean) --

                Matches keywords or phrases on their presence or absence. If set to TRUE , the rule matches when all the specified keywords or phrases are absent. Default: FALSE .

            • SentimentConfiguration (dict) --

              Specifies the settings for predicting sentiment in a real-time alert rule.

              • RuleName (string) --

                The name of the rule in the sentiment configuration.

              • SentimentType (string) --

                The type of sentiment, POSITIVE , NEGATIVE , or NEUTRAL .

              • TimePeriod (integer) --

                Specifies the analysis interval.

            • IssueDetectionConfiguration (dict) --

              Specifies the issue detection settings for a real-time alert rule.

              • RuleName (string) --

                The name of the issue detection rule.

      • Elements (list) --

        The elements in the configuration.

        • (dict) --

          An element in a media insights pipeline configuration.

          • Type (string) --

            The element type.

          • AmazonTranscribeCallAnalyticsProcessorConfiguration (dict) --

            The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

            • LanguageCode (string) --

              The language code in the configuration.

            • VocabularyName (string) --

              Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

              If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.

              For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide .

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName (string) --

              Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

              If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.

              For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide .

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod (string) --

              Specifies how to apply a vocabulary filter to a transcript.

              To replace words with *** , choose mask .

              To delete words, choose remove .

              To flag words without changing them, choose tag .

            • LanguageModelName (string) --

              Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide .

            • EnablePartialResultsStabilization (boolean) --

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • PartialResultsStability (string) --

              Specifies the level of stability to use when you enable partial results stabilization ( EnablePartialResultsStabilization ).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • ContentIdentificationType (string) --

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • ContentRedactionType (string) --

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • PiiEntityTypes (string) --

              Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL .

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType , but you can't include both.

              Values must be comma-separated and can include: ADDRESS , BANK_ACCOUNT_NUMBER , BANK_ROUTING , CREDIT_DEBIT_CVV , CREDIT_DEBIT_EXPIRY , CREDIT_DEBIT_NUMBER , EMAIL , NAME , PHONE , PIN , SSN , or ALL .

              Length Constraints: Minimum length of 1. Maximum length of 300.

            • FilterPartialResults (boolean) --

              If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

            • PostCallAnalyticsSettings (dict) --

              The settings for a post-call analysis task in an analytics configuration.

              • OutputLocation (string) --

                The URL of the Amazon S3 bucket that contains the post-call data.

              • DataAccessRoleArn (string) --

                The ARN of the role used by Amazon Web Services Transcribe to upload your post call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide .

              • ContentRedactionOutput (string) --

                The content redaction output settings for a post-call analysis task.

              • OutputEncryptionKMSKeyId (string) --

                The ID of the KMS (Key Management Service) key used to encrypt the output.

            • CallAnalyticsStreamCategories (list) --

              By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

              • (string) --

          • AmazonTranscribeProcessorConfiguration (dict) --

            The transcription processor configuration settings in a media insights pipeline configuration element.

            • LanguageCode (string) --

              The language code that represents the language spoken in your audio.

              If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

              For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide .

            • VocabularyName (string) --

              The name of the custom vocabulary that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName (string) --

              The name of the custom vocabulary filter that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod (string) --

              The vocabulary filtering method used in your Call Analytics transcription.

            • ShowSpeakerLabel (boolean) --

              Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

              For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide .

            • EnablePartialResultsStabilization (boolean) --

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • PartialResultsStability (string) --

              The level of stability to use when you enable partial results stabilization ( EnablePartialResultsStabilization ).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • ContentIdentificationType (string) --

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • ContentRedactionType (string) --

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • PiiEntityTypes (string) --

              The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL .

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType , but you can't include both.

              Values must be comma-separated and can include: ADDRESS , BANK_ACCOUNT_NUMBER , BANK_ROUTING , CREDIT_DEBIT_CVV , CREDIT_DEBIT_EXPIRY , CREDIT_DEBIT_NUMBER , EMAIL , NAME , PHONE , PIN , SSN , or ALL .

              If you leave this parameter empty, the default behavior is equivalent to ALL .

            • LanguageModelName (string) --

              The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide .

            • FilterPartialResults (boolean) --

              If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

            • IdentifyLanguage (boolean) --

              Turns language identification on or off.

            • LanguageOptions (string) --

              The language options for the transcription, such as automatic language detection.

            • PreferredLanguage (string) --

              The preferred language for the transcription.

            • VocabularyNames (string) --

              The names of the custom vocabulary or vocabularies used during transcription.

            • VocabularyFilterNames (string) --

              The names of the custom vocabulary filter or filters used during transcription.

          • KinesisDataStreamSinkConfiguration (dict) --

            The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the sink.

          • S3RecordingSinkConfiguration (dict) --

            The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

            • Destination (string) --

              The default URI of the Amazon S3 bucket used as the recording sink.

            • RecordingFileFormat (string) --

              The default file format for the media files sent to the Amazon S3 bucket.

          • VoiceAnalyticsProcessorConfiguration (dict) --

            The voice analytics configuration settings in a media insights pipeline configuration element.

            • SpeakerSearchStatus (string) --

              The status of the speaker search task.

            • VoiceToneAnalysisStatus (string) --

              The status of the voice tone analysis task.

          • LambdaFunctionSinkConfiguration (dict) --

            The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the sink.

          • SqsQueueSinkConfiguration (dict) --

            The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the SQS sink.

          • SnsTopicSinkConfiguration (dict) --

            The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the SNS sink.

          • VoiceEnhancementSinkConfiguration (dict) --

            The configuration settings for the VoiceEnhancementSinkConfiguration element.

            • Disabled (boolean) --

              Disables the VoiceEnhancementSinkConfiguration element.

      • MediaInsightsPipelineConfigurationId (string) --

        The ID of the configuration.

      • CreatedTimestamp (datetime) --

        The time at which the configuration was created.

      • UpdatedTimestamp (datetime) --

        The time at which the configuration was last updated.
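
For example, the following minimal sketch calls this API and checks for the Voice Enhancement feature added in this release. It assumes the boto3 client name chime-sdk-media-pipelines and a placeholder configuration name MyAnalyticsConfig; adjust both for your account.

import boto3

# A minimal sketch: fetch an existing configuration and report whether a
# VoiceEnhancementSink element is present. "MyAnalyticsConfig" is a
# hypothetical configuration name.
client = boto3.client('chime-sdk-media-pipelines')

response = client.get_media_insights_pipeline_configuration(
    Identifier='MyAnalyticsConfig'  # name or ARN of the configuration
)

configuration = response['MediaInsightsPipelineConfiguration']
voice_enhancement = [
    element for element in configuration['Elements']
    if element['Type'] == 'VoiceEnhancementSink'
]

if voice_enhancement:
    disabled = voice_enhancement[0]['VoiceEnhancementSinkConfiguration']['Disabled']
    print(f'VoiceEnhancementSink present, Disabled={disabled}')
else:
    print('No VoiceEnhancementSink element in this configuration')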

GetMediaPipeline (updated) Link ¶
Changes (response)
{'MediaPipeline': {'MediaCapturePipeline': {'Status': {'NotStarted'}},
                   'MediaConcatenationPipeline': {'Status': {'NotStarted'}},
                   'MediaInsightsPipeline': {'ElementStatuses': [{'Status': 'NotStarted | NotSupported | Initializing | InProgress | Failed | Stopping | Stopped | Paused',
                                                                  'Type': 'AmazonTranscribeCallAnalyticsProcessor | VoiceAnalyticsProcessor | AmazonTranscribeProcessor | KinesisDataStreamSink | LambdaFunctionSink | SqsQueueSink | SnsTopicSink | S3RecordingSink | VoiceEnhancementSink'}],
                                             'Status': {'NotStarted'}},
                   'MediaLiveConnectorPipeline': {'Status': {'NotStarted'}}}}

Gets an existing media pipeline.

See also: AWS API Documentation

Request Syntax

client.get_media_pipeline(
    MediaPipelineId='string'
)
type MediaPipelineId

string

param MediaPipelineId

[REQUIRED]

The ID of the pipeline that you want to get.

rtype

dict

returns

Response Syntax

{
    'MediaPipeline': {
        'MediaCapturePipeline': {
            'MediaPipelineId': 'string',
            'MediaPipelineArn': 'string',
            'SourceType': 'ChimeSdkMeeting',
            'SourceArn': 'string',
            'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted',
            'SinkType': 'S3Bucket',
            'SinkArn': 'string',
            'CreatedTimestamp': datetime(2015, 1, 1),
            'UpdatedTimestamp': datetime(2015, 1, 1),
            'ChimeSdkMeetingConfiguration': {
                'SourceConfiguration': {
                    'SelectedVideoStreams': {
                        'AttendeeIds': [
                            'string',
                        ],
                        'ExternalUserIds': [
                            'string',
                        ]
                    }
                },
                'ArtifactsConfiguration': {
                    'Audio': {
                        'MuxType': 'AudioOnly'|'AudioWithActiveSpeakerVideo'|'AudioWithCompositedVideo'
                    },
                    'Video': {
                        'State': 'Enabled'|'Disabled',
                        'MuxType': 'VideoOnly'
                    },
                    'Content': {
                        'State': 'Enabled'|'Disabled',
                        'MuxType': 'ContentOnly'
                    },
                    'CompositedVideo': {
                        'Layout': 'GridView',
                        'Resolution': 'HD'|'FHD',
                        'GridViewConfiguration': {
                            'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly',
                            'PresenterOnlyConfiguration': {
                                'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                            },
                            'ActiveSpeakerOnlyConfiguration': {
                                'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                            },
                            'HorizontalLayoutConfiguration': {
                                'TileOrder': 'JoinSequence'|'SpeakerSequence',
                                'TilePosition': 'Top'|'Bottom',
                                'TileCount': 123,
                                'TileAspectRatio': 'string'
                            },
                            'VerticalLayoutConfiguration': {
                                'TileOrder': 'JoinSequence'|'SpeakerSequence',
                                'TilePosition': 'Left'|'Right',
                                'TileCount': 123,
                                'TileAspectRatio': 'string'
                            },
                            'VideoAttribute': {
                                'CornerRadius': 123,
                                'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                                'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                                'BorderThickness': 123
                            },
                            'CanvasOrientation': 'Landscape'|'Portrait'
                        }
                    }
                }
            }
        },
        'MediaLiveConnectorPipeline': {
            'Sources': [
                {
                    'SourceType': 'ChimeSdkMeeting',
                    'ChimeSdkMeetingLiveConnectorConfiguration': {
                        'Arn': 'string',
                        'MuxType': 'AudioWithCompositedVideo'|'AudioWithActiveSpeakerVideo',
                        'CompositedVideo': {
                            'Layout': 'GridView',
                            'Resolution': 'HD'|'FHD',
                            'GridViewConfiguration': {
                                'ContentShareLayout': 'PresenterOnly'|'Horizontal'|'Vertical'|'ActiveSpeakerOnly',
                                'PresenterOnlyConfiguration': {
                                    'PresenterPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                                },
                                'ActiveSpeakerOnlyConfiguration': {
                                    'ActiveSpeakerPosition': 'TopLeft'|'TopRight'|'BottomLeft'|'BottomRight'
                                },
                                'HorizontalLayoutConfiguration': {
                                    'TileOrder': 'JoinSequence'|'SpeakerSequence',
                                    'TilePosition': 'Top'|'Bottom',
                                    'TileCount': 123,
                                    'TileAspectRatio': 'string'
                                },
                                'VerticalLayoutConfiguration': {
                                    'TileOrder': 'JoinSequence'|'SpeakerSequence',
                                    'TilePosition': 'Left'|'Right',
                                    'TileCount': 123,
                                    'TileAspectRatio': 'string'
                                },
                                'VideoAttribute': {
                                    'CornerRadius': 123,
                                    'BorderColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                                    'HighlightColor': 'Black'|'Blue'|'Red'|'Green'|'White'|'Yellow',
                                    'BorderThickness': 123
                                },
                                'CanvasOrientation': 'Landscape'|'Portrait'
                            }
                        },
                        'SourceConfiguration': {
                            'SelectedVideoStreams': {
                                'AttendeeIds': [
                                    'string',
                                ],
                                'ExternalUserIds': [
                                    'string',
                                ]
                            }
                        }
                    }
                },
            ],
            'Sinks': [
                {
                    'SinkType': 'RTMP',
                    'RTMPConfiguration': {
                        'Url': 'string',
                        'AudioChannels': 'Stereo'|'Mono',
                        'AudioSampleRate': 'string'
                    }
                },
            ],
            'MediaPipelineId': 'string',
            'MediaPipelineArn': 'string',
            'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted',
            'CreatedTimestamp': datetime(2015, 1, 1),
            'UpdatedTimestamp': datetime(2015, 1, 1)
        },
        'MediaConcatenationPipeline': {
            'MediaPipelineId': 'string',
            'MediaPipelineArn': 'string',
            'Sources': [
                {
                    'Type': 'MediaCapturePipeline',
                    'MediaCapturePipelineSourceConfiguration': {
                        'MediaPipelineArn': 'string',
                        'ChimeSdkMeetingConfiguration': {
                            'ArtifactsConfiguration': {
                                'Audio': {
                                    'State': 'Enabled'
                                },
                                'Video': {
                                    'State': 'Enabled'|'Disabled'
                                },
                                'Content': {
                                    'State': 'Enabled'|'Disabled'
                                },
                                'DataChannel': {
                                    'State': 'Enabled'|'Disabled'
                                },
                                'TranscriptionMessages': {
                                    'State': 'Enabled'|'Disabled'
                                },
                                'MeetingEvents': {
                                    'State': 'Enabled'|'Disabled'
                                },
                                'CompositedVideo': {
                                    'State': 'Enabled'|'Disabled'
                                }
                            }
                        }
                    }
                },
            ],
            'Sinks': [
                {
                    'Type': 'S3Bucket',
                    'S3BucketSinkConfiguration': {
                        'Destination': 'string'
                    }
                },
            ],
            'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted',
            'CreatedTimestamp': datetime(2015, 1, 1),
            'UpdatedTimestamp': datetime(2015, 1, 1)
        },
        'MediaInsightsPipeline': {
            'MediaPipelineId': 'string',
            'MediaPipelineArn': 'string',
            'MediaInsightsPipelineConfigurationArn': 'string',
            'Status': 'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'|'NotStarted',
            'KinesisVideoStreamSourceRuntimeConfiguration': {
                'Streams': [
                    {
                        'StreamArn': 'string',
                        'FragmentNumber': 'string',
                        'StreamChannelDefinition': {
                            'NumberOfChannels': 123,
                            'ChannelDefinitions': [
                                {
                                    'ChannelId': 123,
                                    'ParticipantRole': 'AGENT'|'CUSTOMER'
                                },
                            ]
                        }
                    },
                ],
                'MediaEncoding': 'pcm',
                'MediaSampleRate': 123
            },
            'MediaInsightsRuntimeMetadata': {
                'string': 'string'
            },
            'KinesisVideoStreamRecordingSourceRuntimeConfiguration': {
                'Streams': [
                    {
                        'StreamArn': 'string'
                    },
                ],
                'FragmentSelector': {
                    'FragmentSelectorType': 'ProducerTimestamp'|'ServerTimestamp',
                    'TimestampRange': {
                        'StartTimestamp': datetime(2015, 1, 1),
                        'EndTimestamp': datetime(2015, 1, 1)
                    }
                }
            },
            'S3RecordingSinkRuntimeConfiguration': {
                'Destination': 'string',
                'RecordingFileFormat': 'Wav'|'Opus'
            },
            'CreatedTimestamp': datetime(2015, 1, 1),
            'ElementStatuses': [
                {
                    'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink',
                    'Status': 'NotStarted'|'NotSupported'|'Initializing'|'InProgress'|'Failed'|'Stopping'|'Stopped'|'Paused'
                },
            ]
        }
    }
}

Response Structure

  • (dict) --

    • MediaPipeline (dict) --

      The media pipeline object.

      • MediaCapturePipeline (dict) --

        A pipeline that enables users to capture audio and video.

        • MediaPipelineId (string) --

          The ID of a media pipeline.

        • MediaPipelineArn (string) --

          The ARN of the media capture pipeline.

        • SourceType (string) --

          Source type from which media artifacts are saved. You must use ChimeSdkMeeting .

        • SourceArn (string) --

          ARN of the source from which the media artifacts are saved.

        • Status (string) --

          The status of the media pipeline.

        • SinkType (string) --

          Destination type to which the media artifacts are saved. You must use an S3 Bucket.

        • SinkArn (string) --

          ARN of the destination to which the media artifacts are saved.

        • CreatedTimestamp (datetime) --

          The time at which the pipeline was created, in ISO 8601 format.

        • UpdatedTimestamp (datetime) --

          The time at which the pipeline was updated, in ISO 8601 format.

        • ChimeSdkMeetingConfiguration (dict) --

          The configuration for a specified media pipeline. SourceType must be ChimeSdkMeeting .

          • SourceConfiguration (dict) --

            The source configuration for a specified media pipeline.

            • SelectedVideoStreams (dict) --

              The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

              • AttendeeIds (list) --

                The attendee IDs of the streams selected for a media pipeline.

                • (string) --

              • ExternalUserIds (list) --

                The external user IDs of the streams selected for a media pipeline.

                • (string) --

          • ArtifactsConfiguration (dict) --

            The configuration for the artifacts in an Amazon Chime SDK meeting.

            • Audio (dict) --

              The configuration for the audio artifacts.

              • MuxType (string) --

                The MUX type of the audio artifact configuration object.

            • Video (dict) --

              The configuration for the video artifacts.

              • State (string) --

                Indicates whether the video artifact is enabled or disabled.

              • MuxType (string) --

                The MUX type of the video artifact configuration object.

            • Content (dict) --

              The configuration for the content artifacts.

              • State (string) --

                Indicates whether the content artifact is enabled or disabled.

              • MuxType (string) --

                The MUX type of the artifact configuration.

            • CompositedVideo (dict) --

              Enables video compositing.

              • Layout (string) --

                The layout setting, such as GridView in the configuration object.

              • Resolution (string) --

                The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

              • GridViewConfiguration (dict) --

                The GridView configuration setting.

                • ContentShareLayout (string) --

                  Defines the layout of the video tiles when content sharing is enabled.

                • PresenterOnlyConfiguration (dict) --

                  Defines the configuration options for a presenter only video tile.

                  • PresenterPosition (string) --

                    Defines the position of the presenter video tile. Default: TopRight .

                • ActiveSpeakerOnlyConfiguration (dict) --

                  The configuration settings for an ActiveSpeakerOnly video tile.

                  • ActiveSpeakerPosition (string) --

                    The position of the ActiveSpeakerOnly video tile.

                • HorizontalLayoutConfiguration (dict) --

                  The configuration settings for a horizontal layout.

                  • TileOrder (string) --

                    Sets the automatic ordering of the video tiles.

                  • TilePosition (string) --

                    Sets the position of horizontal tiles.

                  • TileCount (integer) --

                    The maximum number of video tiles to display.

                  • TileAspectRatio (string) --

                    Sets the aspect ratio of the video tiles, such as 16:9.

                • VerticalLayoutConfiguration (dict) --

                  The configuration settings for a vertical layout.

                  • TileOrder (string) --

                    Sets the automatic ordering of the video tiles.

                  • TilePosition (string) --

                    Sets the position of vertical tiles.

                  • TileCount (integer) --

                    The maximum number of tiles to display.

                  • TileAspectRatio (string) --

                    Sets the aspect ratio of the video tiles, such as 16:9.

                • VideoAttribute (dict) --

                  The attribute settings for the video tiles.

                  • CornerRadius (integer) --

                    Sets the corner radius of all video tiles.

                  • BorderColor (string) --

                    Defines the border color of all video tiles.

                  • HighlightColor (string) --

                    Defines the highlight color for the active video tile.

                  • BorderThickness (integer) --

                    Defines the border thickness for all video tiles.

                • CanvasOrientation (string) --

                  The orientation setting, horizontal or vertical.

      • MediaLiveConnectorPipeline (dict) --

        The connector pipeline of the media pipeline.

        • Sources (list) --

          The connector pipeline's data sources.

          • (dict) --

            The data source configuration object of a streaming media pipeline.

            • SourceType (string) --

              The source configuration's media source type.

            • ChimeSdkMeetingLiveConnectorConfiguration (dict) --

              The configuration settings of the connector pipeline.

              • Arn (string) --

                The configuration object's Chime SDK meeting ARN.

              • MuxType (string) --

                The configuration object's multiplex type.

              • CompositedVideo (dict) --

                The media pipeline's composited video.

                • Layout (string) --

                  The layout setting, such as GridView in the configuration object.

                • Resolution (string) --

                  The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

                • GridViewConfiguration (dict) --

                  The GridView configuration setting.

                  • ContentShareLayout (string) --

                    Defines the layout of the video tiles when content sharing is enabled.

                  • PresenterOnlyConfiguration (dict) --

                    Defines the configuration options for a presenter only video tile.

                    • PresenterPosition (string) --

                      Defines the position of the presenter video tile. Default: TopRight .

                  • ActiveSpeakerOnlyConfiguration (dict) --

                    The configuration settings for an ActiveSpeakerOnly video tile.

                    • ActiveSpeakerPosition (string) --

                      The position of the ActiveSpeakerOnly video tile.

                  • HorizontalLayoutConfiguration (dict) --

                    The configuration settings for a horizontal layout.

                    • TileOrder (string) --

                      Sets the automatic ordering of the video tiles.

                    • TilePosition (string) --

                      Sets the position of horizontal tiles.

                    • TileCount (integer) --

                      The maximum number of video tiles to display.

                    • TileAspectRatio (string) --

                      Sets the aspect ratio of the video tiles, such as 16:9.

                  • VerticalLayoutConfiguration (dict) --

                    The configuration settings for a vertical layout.

                    • TileOrder (string) --

                      Sets the automatic ordering of the video tiles.

                    • TilePosition (string) --

                      Sets the position of vertical tiles.

                    • TileCount (integer) --

                      The maximum number of tiles to display.

                    • TileAspectRatio (string) --

                      Sets the aspect ratio of the video tiles, such as 16:9.

                  • VideoAttribute (dict) --

                    The attribute settings for the video tiles.

                    • CornerRadius (integer) --

                      Sets the corner radius of all video tiles.

                    • BorderColor (string) --

                      Defines the border color of all video tiles.

                    • HighlightColor (string) --

                      Defines the highlight color for the active video tile.

                    • BorderThickness (integer) --

                      Defines the border thickness for all video tiles.

                  • CanvasOrientation (string) --

                    The orientation setting, horizontal or vertical.

              • SourceConfiguration (dict) --

                The source configuration settings of the media pipeline's configuration object.

                • SelectedVideoStreams (dict) --

                  The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

                  • AttendeeIds (list) --

                    The attendee IDs of the streams selected for a media pipeline.

                    • (string) --

                  • ExternalUserIds (list) --

                    The external user IDs of the streams selected for a media pipeline.

                    • (string) --

        • Sinks (list) --

          The connector pipeline's data sinks.

          • (dict) --

            The media pipeline's sink configuration settings.

            • SinkType (string) --

              The sink configuration's sink type.

            • RTMPConfiguration (dict) --

              The sink configuration's RTMP configuration settings.

              • Url (string) --

                The URL of the RTMP configuration.

              • AudioChannels (string) --

                The audio channels set for the RTMP configuration.

              • AudioSampleRate (string) --

                The audio sample rate set for the RTMP configuration. Default: 48000.

        • MediaPipelineId (string) --

          The connector pipeline's ID.

        • MediaPipelineArn (string) --

          The connector pipeline's ARN.

        • Status (string) --

          The connector pipeline's status.

        • CreatedTimestamp (datetime) --

          The time at which the connector pipeline was created.

        • UpdatedTimestamp (datetime) --

          The time at which the connector pipeline was last updated.

      • MediaConcatenationPipeline (dict) --

        The media concatenation pipeline in a media pipeline.

        • MediaPipelineId (string) --

          The ID of the media pipeline being concatenated.

        • MediaPipelineArn (string) --

          The ARN of the media pipeline that you specify in the SourceConfiguration object.

        • Sources (list) --

          The data sources being concatenated.

          • (dict) --

            The source type and media pipeline configuration settings in a configuration object.

            • Type (string) --

              The type of concatenation source in a configuration object.

            • MediaCapturePipelineSourceConfiguration (dict) --

              The concatenation settings for the media pipeline in a configuration object.

              • MediaPipelineArn (string) --

                The media pipeline ARN in the configuration object of a media capture pipeline.

              • ChimeSdkMeetingConfiguration (dict) --

                The meeting configuration settings in a media capture pipeline configuration object.

                • ArtifactsConfiguration (dict) --

                  The configuration for the artifacts in an Amazon Chime SDK meeting concatenation.

                  • Audio (dict) --

                    The configuration for the audio artifacts concatenation.

                    • State (string) --

                      Enables or disables the configuration object.

                  • Video (dict) --

                    The configuration for the video artifacts concatenation.

                    • State (string) --

                      Enables or disables the configuration object.

                  • Content (dict) --

                    The configuration for the content artifacts concatenation.

                    • State (string) --

                      Enables or disables the configuration object.

                  • DataChannel (dict) --

                    The configuration for the data channel artifacts concatenation.

                    • State (string) --

                      Enables or disables the configuration object.

                  • TranscriptionMessages (dict) --

                    The configuration for the transcription messages artifacts concatenation.

                    • State (string) --

                      Enables or disables the configuration object.

                  • MeetingEvents (dict) --

                    The configuration for the meeting events artifacts concatenation.

                    • State (string) --

                      Enables or disables the configuration object.

                  • CompositedVideo (dict) --

                    The configuration for the composited video artifacts concatenation.

                    • State (string) --

                      Enables or disables the configuration object.

        • Sinks (list) --

          The data sinks of the concatenation pipeline.

          • (dict) --

            The data sink of the configuration object.

            • Type (string) --

              The type of data sink in the configuration object.

            • S3BucketSinkConfiguration (dict) --

              The configuration settings for an Amazon S3 bucket sink.

              • Destination (string) --

                The destination URL of the S3 bucket.

        • Status (string) --

          The status of the concatenation pipeline.

        • CreatedTimestamp (datetime) --

          The time at which the concatenation pipeline was created.

        • UpdatedTimestamp (datetime) --

          The time at which the concatenation pipeline was last updated.

      • MediaInsightsPipeline (dict) --

        The media insights pipeline of a media pipeline.

        • MediaPipelineId (string) --

          The ID of a media insights pipeline.

        • MediaPipelineArn (string) --

          The ARN of a media insights pipeline.

        • MediaInsightsPipelineConfigurationArn (string) --

          The ARN of a media insights pipeline's configuration settings.

        • Status (string) --

          The status of a media insights pipeline.

        • KinesisVideoStreamSourceRuntimeConfiguration (dict) --

          The configuration settings for a Kinesis runtime video stream in a media insights pipeline.

          • Streams (list) --

            The streams in the source runtime configuration of a Kinesis video stream.

            • (dict) --

              The configuration settings for a stream.

              • StreamArn (string) --

                The ARN of the stream.

              • FragmentNumber (string) --

                The unique identifier of the fragment to begin processing.

              • StreamChannelDefinition (dict) --

                The streaming channel definition in the stream configuration.

                • NumberOfChannels (integer) --

                  The number of channels in a streaming channel.

                • ChannelDefinitions (list) --

                  The definitions of the channels in a streaming channel.

                  • (dict) --

                    Defines an audio channel in a Kinesis video stream.

                    • ChannelId (integer) --

                      The channel ID.

                    • ParticipantRole (string) --

                      Specifies whether the audio in a channel belongs to the AGENT or CUSTOMER .

          • MediaEncoding (string) --

            Specifies the encoding of your input audio. Supported format: PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

            For more information, see Media formats in the Amazon Transcribe Developer Guide .

          • MediaSampleRate (integer) --

            The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.

            Valid Range: Minimum value of 8000. Maximum value of 48000.

        • MediaInsightsRuntimeMetadata (dict) --

          The runtime metadata of a media insights pipeline.

          • (string) --

            • (string) --

        • KinesisVideoStreamRecordingSourceRuntimeConfiguration (dict) --

          The runtime configuration settings for a Kinesis recording video stream in a media insights pipeline.

          • Streams (list) --

            The stream or streams to be recorded.

            • (dict) --

              A structure that holds the settings for recording media.

              • StreamArn (string) --

                The ARN of the recording stream.

          • FragmentSelector (dict) --

            Describes the timestamp range and timestamp origin of a range of fragments in the Kinesis video stream.

            • FragmentSelectorType (string) --

              The origin of the timestamps to use, Server or Producer . For more information, see StartSelectorType in the Amazon Kinesis Video Streams Developer Guide .

            • TimestampRange (dict) --

              The range of timestamps to return.

              • StartTimestamp (datetime) --

                The starting timestamp for the specified range.

              • EndTimestamp (datetime) --

                The ending timestamp for the specified range.

        • S3RecordingSinkRuntimeConfiguration (dict) --

          The runtime configuration of the Amazon S3 bucket that stores recordings in a media insights pipeline.

          • Destination (string) --

            The URI of the S3 bucket used as the sink.

          • RecordingFileFormat (string) --

            The file format for the media files sent to the Amazon S3 bucket.

        • CreatedTimestamp (datetime) --

          The time at which the media insights pipeline was created.

        • ElementStatuses (list) --

          The statuses that the elements in a media insights pipeline can have during data processing.

          • (dict) --

            The status of the pipeline element.

            • Type (string) --

              The type of status.

            • Status (string) --

              The element's status.
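
As a non-authoritative sketch, the snippet below polls GetMediaPipeline for a placeholder pipeline ID and prints the per-element statuses of a media insights pipeline, including the NotStarted status added in this release. The polling interval and terminal states are assumptions, not service guidance.

import time
import boto3

client = boto3.client('chime-sdk-media-pipelines')
pipeline_id = '00000000-0000-0000-0000-000000000000'  # placeholder pipeline ID

while True:
    pipeline = client.get_media_pipeline(MediaPipelineId=pipeline_id)['MediaPipeline']
    insights = pipeline.get('MediaInsightsPipeline', {})
    status = insights.get('Status')
    print(f'Media insights pipeline status: {status}')
    for element in insights.get('ElementStatuses', []):
        print(f"  {element['Type']}: {element['Status']}")
    if status not in ('NotStarted', 'Initializing'):
        break
    time.sleep(5)  # assumed polling interval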

UpdateMediaInsightsPipelineConfiguration (updated) Link ¶
Changes (request, response)
Request
{'Elements': {'Type': {'VoiceEnhancementSink'},
              'VoiceEnhancementSinkConfiguration': {'Disabled': 'boolean'}}}
Response
{'MediaInsightsPipelineConfiguration': {'Elements': {'Type': {'VoiceEnhancementSink'},
                                                     'VoiceEnhancementSinkConfiguration': {'Disabled': 'boolean'}}}}

Updates the media insights pipeline's configuration settings.

See also: AWS API Documentation

Request Syntax

client.update_media_insights_pipeline_configuration(
    Identifier='string',
    ResourceAccessRoleArn='string',
    RealTimeAlertConfiguration={
        'Disabled': True|False,
        'Rules': [
            {
                'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection',
                'KeywordMatchConfiguration': {
                    'RuleName': 'string',
                    'Keywords': [
                        'string',
                    ],
                    'Negate': True|False
                },
                'SentimentConfiguration': {
                    'RuleName': 'string',
                    'SentimentType': 'NEGATIVE',
                    'TimePeriod': 123
                },
                'IssueDetectionConfiguration': {
                    'RuleName': 'string'
                }
            },
        ]
    },
    Elements=[
        {
            'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink',
            'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                'VocabularyName': 'string',
                'VocabularyFilterName': 'string',
                'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                'LanguageModelName': 'string',
                'EnablePartialResultsStabilization': True|False,
                'PartialResultsStability': 'high'|'medium'|'low',
                'ContentIdentificationType': 'PII',
                'ContentRedactionType': 'PII',
                'PiiEntityTypes': 'string',
                'FilterPartialResults': True|False,
                'PostCallAnalyticsSettings': {
                    'OutputLocation': 'string',
                    'DataAccessRoleArn': 'string',
                    'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted',
                    'OutputEncryptionKMSKeyId': 'string'
                },
                'CallAnalyticsStreamCategories': [
                    'string',
                ]
            },
            'AmazonTranscribeProcessorConfiguration': {
                'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                'VocabularyName': 'string',
                'VocabularyFilterName': 'string',
                'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                'ShowSpeakerLabel': True|False,
                'EnablePartialResultsStabilization': True|False,
                'PartialResultsStability': 'high'|'medium'|'low',
                'ContentIdentificationType': 'PII',
                'ContentRedactionType': 'PII',
                'PiiEntityTypes': 'string',
                'LanguageModelName': 'string',
                'FilterPartialResults': True|False,
                'IdentifyLanguage': True|False,
                'LanguageOptions': 'string',
                'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                'VocabularyNames': 'string',
                'VocabularyFilterNames': 'string'
            },
            'KinesisDataStreamSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'S3RecordingSinkConfiguration': {
                'Destination': 'string',
                'RecordingFileFormat': 'Wav'|'Opus'
            },
            'VoiceAnalyticsProcessorConfiguration': {
                'SpeakerSearchStatus': 'Enabled'|'Disabled',
                'VoiceToneAnalysisStatus': 'Enabled'|'Disabled'
            },
            'LambdaFunctionSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'SqsQueueSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'SnsTopicSinkConfiguration': {
                'InsightsTarget': 'string'
            },
            'VoiceEnhancementSinkConfiguration': {
                'Disabled': True|False
            }
        },
    ]
)
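
For orientation, here is a minimal, hedged sketch of the call whose parameters are documented below, assuming the boto3 update_media_insights_pipeline_configuration operation on the chime-sdk-media-pipelines client; every identifier, name, and ARN in the sketch is a placeholder rather than a value taken from this reference.

import boto3

client = boto3.client('chime-sdk-media-pipelines')

# Minimal update: one transcription processor feeding one Kinesis Data Stream sink.
# The identifier, role ARN, and stream ARN are hypothetical placeholders.
response = client.update_media_insights_pipeline_configuration(
    Identifier='MyCallAnalyticsConfiguration',
    ResourceAccessRoleArn='arn:aws:iam::111122223333:role/MediaInsightsRole',
    Elements=[
        {
            'Type': 'AmazonTranscribeCallAnalyticsProcessor',
            'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                'LanguageCode': 'en-US'
            }
        },
        {
            'Type': 'KinesisDataStreamSink',
            'KinesisDataStreamSinkConfiguration': {
                'InsightsTarget': 'arn:aws:kinesis:us-east-1:111122223333:stream/insights-stream'
            }
        }
    ]
)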
type Identifier

string

param Identifier

[REQUIRED]

The unique identifier for the resource to be updated. Valid values include the name and ARN of the media insights pipeline configuration.

type ResourceAccessRoleArn

string

param ResourceAccessRoleArn

[REQUIRED]

The ARN of the role used by the service to access Amazon Web Services resources.

type RealTimeAlertConfiguration

dict

param RealTimeAlertConfiguration

The configuration settings for real-time alerts for the media insights pipeline. A combined sketch of the rule types follows the field descriptions below.

  • Disabled (boolean) --

    Turns off real-time alerts.

  • Rules (list) --

    The rules in the alert. Rules specify the words or phrases that you want to be notified about.

    • (dict) --

      Specifies the words or phrases that trigger an alert.

      • Type (string) -- [REQUIRED]

        The type of alert rule.

      • KeywordMatchConfiguration (dict) --

        Specifies the settings for matching the keywords in a real-time alert rule.

        • RuleName (string) -- [REQUIRED]

          The name of the keyword match rule.

        • Keywords (list) -- [REQUIRED]

          The keywords or phrases that you want to match.

          • (string) --

        • Negate (boolean) --

          Matches keywords or phrases on their presence or absence. If set to TRUE , the rule matches when all the specified keywords or phrases are absent. Default: FALSE .

      • SentimentConfiguration (dict) --

        Specifies the settings for predicting sentiment in a real-time alert rule.

        • RuleName (string) -- [REQUIRED]

          The name of the rule in the sentiment configuration.

        • SentimentType (string) -- [REQUIRED]

          The type of sentiment, POSITIVE , NEGATIVE , or NEUTRAL .

        • TimePeriod (integer) -- [REQUIRED]

          Specifies the analysis interval.

      • IssueDetectionConfiguration (dict) --

        Specifies the issue detection settings for a real-time alert rule.

        • RuleName (string) -- [REQUIRED]

          The name of the issue detection rule.
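
As a point of reference, a RealTimeAlertConfiguration that combines the three rule types might look like the sketch below; the rule names, keywords, and time period are illustrative placeholders only.

# Illustrative only: one rule of each supported type.
real_time_alert_configuration = {
    'Disabled': False,
    'Rules': [
        {
            'Type': 'KeywordMatch',
            'KeywordMatchConfiguration': {
                'RuleName': 'CancellationKeywords',   # hypothetical rule name
                'Keywords': ['cancel', 'refund'],
                'Negate': False                       # alert when the keywords are present
            }
        },
        {
            'Type': 'Sentiment',
            'SentimentConfiguration': {
                'RuleName': 'SustainedNegativeSentiment',
                'SentimentType': 'NEGATIVE',
                'TimePeriod': 60                      # analysis interval (see TimePeriod above)
            }
        },
        {
            'Type': 'IssueDetection',
            'IssueDetectionConfiguration': {
                'RuleName': 'DetectedIssues'
            }
        }
    ]
}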

type Elements

list

param Elements

[REQUIRED]

The elements in the request, such as a processor for Amazon Transcribe or a sink for a Kinesis Data Stream. A sketch combining a processor and a sink follows the element descriptions below.

  • (dict) --

    An element in a media insights pipeline configuration.

    • Type (string) -- [REQUIRED]

      The element type.

    • AmazonTranscribeCallAnalyticsProcessorConfiguration (dict) --

      The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

      • LanguageCode (string) -- [REQUIRED]

        The language code in the configuration.

      • VocabularyName (string) --

        Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

        If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.

        For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide .

        Length Constraints: Minimum length of 1. Maximum length of 200.

      • VocabularyFilterName (string) --

        Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

        If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.

        For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide .

        Length Constraints: Minimum length of 1. Maximum length of 200.

      • VocabularyFilterMethod (string) --

        Specifies how to apply a vocabulary filter to a transcript.

        To replace words with *** , choose mask .

        To delete words, choose remove .

        To flag words without changing them, choose tag .

      • LanguageModelName (string) --

        Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

        The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings.

        For more information, see Custom language models in the Amazon Transcribe Developer Guide .

      • EnablePartialResultsStabilization (boolean) --

        Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

      • PartialResultsStability (string) --

        Specifies the level of stability to use when you enable partial results stabilization ( EnablePartialResultsStabilization ).

        Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

      • ContentIdentificationType (string) --

        Labels all personally identifiable information (PII) identified in your transcript.

        Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

        You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException .

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

      • ContentRedactionType (string) --

        Redacts all personally identifiable information (PII) identified in your transcript.

        Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

        You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException .

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

      • PiiEntityTypes (string) --

        Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL .

        To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType , but you can't include both.

        Values must be comma-separated and can include: ADDRESS , BANK_ACCOUNT_NUMBER , BANK_ROUTING , CREDIT_DEBIT_CVV , CREDIT_DEBIT_EXPIRY , CREDIT_DEBIT_NUMBER , EMAIL , NAME , PHONE , PIN , SSN , or ALL .

        Length Constraints: Minimum length of 1. Maximum length of 300.

      • FilterPartialResults (boolean) --

        If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

      • PostCallAnalyticsSettings (dict) --

        The settings for a post-call analysis task in an analytics configuration.

        • OutputLocation (string) -- [REQUIRED]

          The URL of the Amazon S3 bucket that contains the post-call data.

        • DataAccessRoleArn (string) -- [REQUIRED]

          The ARN of the role used by Amazon Transcribe to upload your post-call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide .

        • ContentRedactionOutput (string) --

          The content redaction output settings for a post-call analysis task.

        • OutputEncryptionKMSKeyId (string) --

          The ID of the KMS (Key Management Service) key used to encrypt the output.

      • CallAnalyticsStreamCategories (list) --

        By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

        • (string) --

    • AmazonTranscribeProcessorConfiguration (dict) --

      The transcription processor configuration settings in a media insights pipeline configuration element.

      • LanguageCode (string) --

        The language code that represents the language spoken in your audio.

        If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

        For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide .

      • VocabularyName (string) --

        The name of the custom vocabulary that you specified in your Call Analytics request.

        Length Constraints: Minimum length of 1. Maximum length of 200.

      • VocabularyFilterName (string) --

        The name of the custom vocabulary filter that you specified in your Call Analytics request.

        Length Constraints: Minimum length of 1. Maximum length of 200.

      • VocabularyFilterMethod (string) --

        The vocabulary filtering method used in your Call Analytics transcription.

      • ShowSpeakerLabel (boolean) --

        Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

        For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide .

      • EnablePartialResultsStabilization (boolean) --

        Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

      • PartialResultsStability (string) --

        The level of stability to use when you enable partial results stabilization ( EnablePartialResultsStabilization ).

        Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

        For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

      • ContentIdentificationType (string) --

        Labels all personally identifiable information (PII) identified in your transcript.

        Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

        You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException .

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

      • ContentRedactionType (string) --

        Redacts all personally identifiable information (PII) identified in your transcript.

        Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

        You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException .

        For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

      • PiiEntityTypes (string) --

        The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL .

        To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType , but you can't include both.

        Values must be comma-separated and can include: ADDRESS , BANK_ACCOUNT_NUMBER , BANK_ROUTING , CREDIT_DEBIT_CVV , CREDIT_DEBIT_EXPIRY , CREDIT_DEBIT_NUMBER , EMAIL , NAME , PHONE , PIN , SSN , or ALL .

        If you leave this parameter empty, the default behavior is equivalent to ALL .

      • LanguageModelName (string) --

        The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

        The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

        For more information, see Custom language models in the Amazon Transcribe Developer Guide .

      • FilterPartialResults (boolean) --

        If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

      • IdentifyLanguage (boolean) --

        Turns language identification on or off.

      • LanguageOptions (string) --

        The language options for the transcription, such as automatic language detection.

      • PreferredLanguage (string) --

        The preferred language for the transcription.

      • VocabularyNames (string) --

        The names of the custom vocabulary or vocabularies used during transcription.

      • VocabularyFilterNames (string) --

        The names of the custom vocabulary filter or filters used during transcription.

    • KinesisDataStreamSinkConfiguration (dict) --

      The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

      • InsightsTarget (string) --

        The ARN of the sink.

    • S3RecordingSinkConfiguration (dict) --

      The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

      • Destination (string) --

        The default URI of the Amazon S3 bucket used as the recording sink.

      • RecordingFileFormat (string) --

        The default file format for the media files sent to the Amazon S3 bucket.

    • VoiceAnalyticsProcessorConfiguration (dict) --

      The voice analytics configuration settings in a media insights pipeline configuration element.

      • SpeakerSearchStatus (string) --

        The status of the speaker search task.

      • VoiceToneAnalysisStatus (string) --

        The status of the voice tone analysis task.

    • LambdaFunctionSinkConfiguration (dict) --

      The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

      • InsightsTarget (string) --

        The ARN of the sink.

    • SqsQueueSinkConfiguration (dict) --

      The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

      • InsightsTarget (string) --

        The ARN of the SQS sink.

    • SnsTopicSinkConfiguration (dict) --

      The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

      • InsightsTarget (string) --

        The ARN of the SNS sink.

    • VoiceEnhancementSinkConfiguration (dict) --

      The configuration settings for the VoiceEnhancementSinkConfiguration element.

      • Disabled (boolean) --

        Disables the VoiceEnhancementSinkConfiguration element.
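
To make the redaction constraints concrete, the following sketch shows an Elements list with a call analytics processor that redacts selected PII types (ContentIdentificationType is omitted because the two settings are mutually exclusive) and an S3 recording sink; the bucket destination and every other name are placeholders.

# Illustrative element list; all names and destinations are hypothetical placeholders.
elements = [
    {
        'Type': 'AmazonTranscribeCallAnalyticsProcessor',
        'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
            'LanguageCode': 'en-US',
            'ContentRedactionType': 'PII',
            'PiiEntityTypes': 'SSN,CREDIT_DEBIT_NUMBER',  # comma-separated, or 'ALL'
            'FilterPartialResults': True                  # drop partial UtteranceEvents
        }
    },
    {
        'Type': 'S3RecordingSink',
        'S3RecordingSinkConfiguration': {
            'Destination': 'arn:aws:s3:::amzn-s3-demo-bucket',  # placeholder; see Destination above
            'RecordingFileFormat': 'Wav'
        }
    }
]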

rtype

dict

returns

Response Syntax

{
    'MediaInsightsPipelineConfiguration': {
        'MediaInsightsPipelineConfigurationName': 'string',
        'MediaInsightsPipelineConfigurationArn': 'string',
        'ResourceAccessRoleArn': 'string',
        'RealTimeAlertConfiguration': {
            'Disabled': True|False,
            'Rules': [
                {
                    'Type': 'KeywordMatch'|'Sentiment'|'IssueDetection',
                    'KeywordMatchConfiguration': {
                        'RuleName': 'string',
                        'Keywords': [
                            'string',
                        ],
                        'Negate': True|False
                    },
                    'SentimentConfiguration': {
                        'RuleName': 'string',
                        'SentimentType': 'NEGATIVE',
                        'TimePeriod': 123
                    },
                    'IssueDetectionConfiguration': {
                        'RuleName': 'string'
                    }
                },
            ]
        },
        'Elements': [
            {
                'Type': 'AmazonTranscribeCallAnalyticsProcessor'|'VoiceAnalyticsProcessor'|'AmazonTranscribeProcessor'|'KinesisDataStreamSink'|'LambdaFunctionSink'|'SqsQueueSink'|'SnsTopicSink'|'S3RecordingSink'|'VoiceEnhancementSink',
                'AmazonTranscribeCallAnalyticsProcessorConfiguration': {
                    'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyName': 'string',
                    'VocabularyFilterName': 'string',
                    'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                    'LanguageModelName': 'string',
                    'EnablePartialResultsStabilization': True|False,
                    'PartialResultsStability': 'high'|'medium'|'low',
                    'ContentIdentificationType': 'PII',
                    'ContentRedactionType': 'PII',
                    'PiiEntityTypes': 'string',
                    'FilterPartialResults': True|False,
                    'PostCallAnalyticsSettings': {
                        'OutputLocation': 'string',
                        'DataAccessRoleArn': 'string',
                        'ContentRedactionOutput': 'redacted'|'redacted_and_unredacted',
                        'OutputEncryptionKMSKeyId': 'string'
                    },
                    'CallAnalyticsStreamCategories': [
                        'string',
                    ]
                },
                'AmazonTranscribeProcessorConfiguration': {
                    'LanguageCode': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyName': 'string',
                    'VocabularyFilterName': 'string',
                    'VocabularyFilterMethod': 'remove'|'mask'|'tag',
                    'ShowSpeakerLabel': True|False,
                    'EnablePartialResultsStabilization': True|False,
                    'PartialResultsStability': 'high'|'medium'|'low',
                    'ContentIdentificationType': 'PII',
                    'ContentRedactionType': 'PII',
                    'PiiEntityTypes': 'string',
                    'LanguageModelName': 'string',
                    'FilterPartialResults': True|False,
                    'IdentifyLanguage': True|False,
                    'LanguageOptions': 'string',
                    'PreferredLanguage': 'en-US'|'en-GB'|'es-US'|'fr-CA'|'fr-FR'|'en-AU'|'it-IT'|'de-DE'|'pt-BR',
                    'VocabularyNames': 'string',
                    'VocabularyFilterNames': 'string'
                },
                'KinesisDataStreamSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'S3RecordingSinkConfiguration': {
                    'Destination': 'string',
                    'RecordingFileFormat': 'Wav'|'Opus'
                },
                'VoiceAnalyticsProcessorConfiguration': {
                    'SpeakerSearchStatus': 'Enabled'|'Disabled',
                    'VoiceToneAnalysisStatus': 'Enabled'|'Disabled'
                },
                'LambdaFunctionSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'SqsQueueSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'SnsTopicSinkConfiguration': {
                    'InsightsTarget': 'string'
                },
                'VoiceEnhancementSinkConfiguration': {
                    'Disabled': True|False
                }
            },
        ],
        'MediaInsightsPipelineConfigurationId': 'string',
        'CreatedTimestamp': datetime(2015, 1, 1),
        'UpdatedTimestamp': datetime(2015, 1, 1)
    }
}

Response Structure

  • (dict) --

    • MediaInsightsPipelineConfiguration (dict) --

      The updated configuration settings.

      • MediaInsightsPipelineConfigurationName (string) --

        The name of the configuration.

      • MediaInsightsPipelineConfigurationArn (string) --

        The ARN of the configuration.

      • ResourceAccessRoleArn (string) --

        The ARN of the role used by the service to access Amazon Web Services resources.

      • RealTimeAlertConfiguration (dict) --

        Lists the rules that trigger a real-time alert.

        • Disabled (boolean) --

          Turns off real-time alerts.

        • Rules (list) --

          The rules in the alert. Rules specify the words or phrases that you want to be notified about.

          • (dict) --

            Specifies the words or phrases that trigger an alert.

            • Type (string) --

              The type of alert rule.

            • KeywordMatchConfiguration (dict) --

              Specifies the settings for matching the keywords in a real-time alert rule.

              • RuleName (string) --

                The name of the keyword match rule.

              • Keywords (list) --

                The keywords or phrases that you want to match.

                • (string) --

              • Negate (boolean) --

                Matches keywords or phrases on their presence or absence. If set to TRUE , the rule matches when all the specified keywords or phrases are absent. Default: FALSE .

            • SentimentConfiguration (dict) --

              Specifies the settings for predicting sentiment in a real-time alert rule.

              • RuleName (string) --

                The name of the rule in the sentiment configuration.

              • SentimentType (string) --

                The type of sentiment, POSITIVE , NEGATIVE , or NEUTRAL .

              • TimePeriod (integer) --

                Specifies the analysis interval.

            • IssueDetectionConfiguration (dict) --

              Specifies the issue detection settings for a real-time alert rule.

              • RuleName (string) --

                The name of the issue detection rule.

      • Elements (list) --

        The elements in the configuration.

        • (dict) --

          An element in a media insights pipeline configuration.

          • Type (string) --

            The element type.

          • AmazonTranscribeCallAnalyticsProcessorConfiguration (dict) --

            The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

            • LanguageCode (string) --

              The language code in the configuration.

            • VocabularyName (string) --

              Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

              If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.

              For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide .

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName (string) --

              Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

              If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.

              For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide .

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod (string) --

              Specifies how to apply a vocabulary filter to a transcript.

              To replace words with *** , choose mask .

              To delete words, choose remove .

              To flag words without changing them, choose tag .

            • LanguageModelName (string) --

              Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide .

            • EnablePartialResultsStabilization (boolean) --

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • PartialResultsStability (string) --

              Specifies the level of stability to use when you enable partial results stabilization ( EnablePartialResultsStabilization ).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • ContentIdentificationType (string) --

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • ContentRedactionType (string) --

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • PiiEntityTypes (string) --

              Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL .

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType , but you can't include both.

              Values must be comma-separated and can include: ADDRESS , BANK_ACCOUNT_NUMBER , BANK_ROUTING , CREDIT_DEBIT_CVV , CREDIT_DEBIT_EXPIRY , CREDIT_DEBIT_NUMBER , EMAIL , NAME , PHONE , PIN , SSN , or ALL .

              Length Constraints: Minimum length of 1. Maximum length of 300.

            • FilterPartialResults (boolean) --

              If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

            • PostCallAnalyticsSettings (dict) --

              The settings for a post-call analysis task in an analytics configuration.

              • OutputLocation (string) --

                The URL of the Amazon S3 bucket that contains the post-call data.

              • DataAccessRoleArn (string) --

                The ARN of the role used by Amazon Transcribe to upload your post-call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide .

              • ContentRedactionOutput (string) --

                The content redaction output settings for a post-call analysis task.

              • OutputEncryptionKMSKeyId (string) --

                The ID of the KMS (Key Management Service) key used to encrypt the output.

            • CallAnalyticsStreamCategories (list) --

              By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

              • (string) --

          • AmazonTranscribeProcessorConfiguration (dict) --

            The transcription processor configuration settings in a media insights pipeline configuration element.

            • LanguageCode (string) --

              The language code that represents the language spoken in your audio.

              If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

              For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide .

            • VocabularyName (string) --

              The name of the custom vocabulary that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName (string) --

              The name of the custom vocabulary filter that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod (string) --

              The vocabulary filtering method used in your Call Analytics transcription.

            • ShowSpeakerLabel (boolean) --

              Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

              For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide .

            • EnablePartialResultsStabilization (boolean) --

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • PartialResultsStability (string) --

              The level of stability to use when you enable partial results stabilization ( EnablePartialResultsStabilization ).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide .

            • ContentIdentificationType (string) --

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • ContentRedactionType (string) --

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException .

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide .

            • PiiEntityTypes (string) --

              The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL .

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType , but you can't include both.

              Values must be comma-separated and can include: ADDRESS , BANK_ACCOUNT_NUMBER , BANK_ROUTING , CREDIT_DEBIT_CVV , CREDIT_DEBIT_EXPIRY , CREDIT_DEBIT_NUMBER , EMAIL , NAME , PHONE , PIN , SSN , or ALL .

              If you leave this parameter empty, the default behavior is equivalent to ALL .

            • LanguageModelName (string) --

              The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide .

            • FilterPartialResults (boolean) --

              If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

            • IdentifyLanguage (boolean) --

              Turns language identification on or off.

            • LanguageOptions (string) --

              The language options for the transcription, such as automatic language detection.

            • PreferredLanguage (string) --

              The preferred language for the transcription.

            • VocabularyNames (string) --

              The names of the custom vocabulary or vocabularies used during transcription.

            • VocabularyFilterNames (string) --

              The names of the custom vocabulary filter or filters used during transcription.

          • KinesisDataStreamSinkConfiguration (dict) --

            The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the sink.

          • S3RecordingSinkConfiguration (dict) --

            The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

            • Destination (string) --

              The default URI of the Amazon S3 bucket used as the recording sink.

            • RecordingFileFormat (string) --

              The default file format for the media files sent to the Amazon S3 bucket.

          • VoiceAnalyticsProcessorConfiguration (dict) --

            The voice analytics configuration settings in a media insights pipeline configuration element.

            • SpeakerSearchStatus (string) --

              The status of the speaker search task.

            • VoiceToneAnalysisStatus (string) --

              The status of the voice tone analysis task.

          • LambdaFunctionSinkConfiguration (dict) --

            The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the sink.

          • SqsQueueSinkConfiguration (dict) --

            The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the SQS sink.

          • SnsTopicSinkConfiguration (dict) --

            The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

            • InsightsTarget (string) --

              The ARN of the SNS sink.

          • VoiceEnhancementSinkConfiguration (dict) --

            The configuration settings for the VoiceEnhancementSinkConfiguration element.

            • Disabled (boolean) --

              Disables the VoiceEnhancementSinkConfiguration element.

      • MediaInsightsPipelineConfigurationId (string) --

        The ID of the configuration.

      • CreatedTimestamp (datetime) --

        The time at which the configuration was created.

      • UpdatedTimestamp (datetime) --

        The time at which the configuration was last updated.
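
Finally, a short sketch of reading the commonly used fields out of the response described above; response is assumed to be the dictionary returned by the update call.

# Pull the identifiers and element types out of the response described above.
configuration = response['MediaInsightsPipelineConfiguration']

print(configuration['MediaInsightsPipelineConfigurationArn'])
print(configuration['MediaInsightsPipelineConfigurationId'])
print(configuration['UpdatedTimestamp'])

for element in configuration['Elements']:
    print(element['Type'])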