Agents for Amazon Bedrock

2024/10/25 - Agents for Amazon Bedrock - 9 updated API methods

Changes: Add support for new model types for Bedrock Agents, add inference profile support for Flows and Prompt Management, and add a new field to configure additional inference configurations for Flows and Prompt Management.

CreateFlow (updated)
Changes (both)
{'definition': {'nodes': {'configuration': {'prompt': {'sourceConfiguration': {'inline': {'additionalModelRequestFields': {}}}}}}}}

Creates a prompt flow that you can use to send an input through various steps to yield an output. Configure nodes, each of which corresponds to a step of the flow, and create connections between the nodes to create paths to different outputs. For more information, see How it works and Create a flow in Amazon Bedrock in the Amazon Bedrock User Guide.

See also: AWS API Documentation

Request Syntax

client.create_flow(
    clientToken='string',
    customerEncryptionKeyArn='string',
    definition={
        'connections': [
            {
                'configuration': {
                    'conditional': {
                        'condition': 'string'
                    },
                    'data': {
                        'sourceOutput': 'string',
                        'targetInput': 'string'
                    }
                },
                'name': 'string',
                'source': 'string',
                'target': 'string',
                'type': 'Data'|'Conditional'
            },
        ],
        'nodes': [
            {
                'configuration': {
                    'agent': {
                        'agentAliasArn': 'string'
                    },
                    'collector': {},
                    'condition': {
                        'conditions': [
                            {
                                'expression': 'string',
                                'name': 'string'
                            },
                        ]
                    },
                    'input': {},
                    'iterator': {},
                    'knowledgeBase': {
                        'knowledgeBaseId': 'string',
                        'modelId': 'string'
                    },
                    'lambdaFunction': {
                        'lambdaArn': 'string'
                    },
                    'lex': {
                        'botAliasArn': 'string',
                        'localeId': 'string'
                    },
                    'output': {},
                    'prompt': {
                        'sourceConfiguration': {
                            'inline': {
                                'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
                                'inferenceConfiguration': {
                                    'text': {
                                        'maxTokens': 123,
                                        'stopSequences': [
                                            'string',
                                        ],
                                        'temperature': ...,
                                        'topP': ...
                                    }
                                },
                                'modelId': 'string',
                                'templateConfiguration': {
                                    'text': {
                                        'inputVariables': [
                                            {
                                                'name': 'string'
                                            },
                                        ],
                                        'text': 'string'
                                    }
                                },
                                'templateType': 'TEXT'
                            },
                            'resource': {
                                'promptArn': 'string'
                            }
                        }
                    },
                    'retrieval': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    },
                    'storage': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    }
                },
                'inputs': [
                    {
                        'expression': 'string',
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'name': 'string',
                'outputs': [
                    {
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'
            },
        ]
    },
    description='string',
    executionRoleArn='string',
    name='string',
    tags={
        'string': 'string'
    }
)
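
The following is a minimal, illustrative sketch of a create_flow call that exercises the new additionalModelRequestFields field in an inline prompt node. The role ARN, flow and node names, model ID, and the model-specific top_k key are hypothetical placeholders, not values taken from this reference; an inference profile ARN can also be passed as modelId.

import boto3

client = boto3.client('bedrock-agent')

response = client.create_flow(
    name='summarizer-flow',
    executionRoleArn='arn:aws:iam::111122223333:role/FlowExecutionRole',  # hypothetical role
    definition={
        'nodes': [
            {
                'type': 'Input',
                'name': 'FlowInput',
                'configuration': {'input': {}},
                'outputs': [{'name': 'document', 'type': 'String'}]
            },
            {
                'type': 'Prompt',
                'name': 'Summarize',
                'configuration': {
                    'prompt': {
                        'sourceConfiguration': {
                            'inline': {
                                'modelId': 'anthropic.claude-3-haiku-20240307-v1:0',  # or an inference profile ARN
                                'templateType': 'TEXT',
                                'templateConfiguration': {
                                    'text': {
                                        'text': 'Summarize the following text: {{document}}',
                                        'inputVariables': [{'name': 'document'}]
                                    }
                                },
                                'inferenceConfiguration': {
                                    'text': {'maxTokens': 512, 'temperature': 0.2}
                                },
                                # Model-specific parameters that inferenceConfiguration doesn't cover.
                                'additionalModelRequestFields': {'top_k': 250}  # hypothetical key
                            }
                        }
                    }
                },
                'inputs': [{'name': 'document', 'type': 'String', 'expression': '$.data'}],
                'outputs': [{'name': 'modelCompletion', 'type': 'String'}]
            },
            {
                'type': 'Output',
                'name': 'FlowOutput',
                'configuration': {'output': {}},
                'inputs': [{'name': 'summary', 'type': 'String', 'expression': '$.data'}]
            }
        ],
        'connections': [
            {
                'type': 'Data',
                'name': 'InputToPrompt',
                'source': 'FlowInput',
                'target': 'Summarize',
                'configuration': {'data': {'sourceOutput': 'document', 'targetInput': 'document'}}
            },
            {
                'type': 'Data',
                'name': 'PromptToOutput',
                'source': 'Summarize',
                'target': 'FlowOutput',
                'configuration': {'data': {'sourceOutput': 'modelCompletion', 'targetInput': 'summary'}}
            }
        ]
    }
)
print(response['id'], response['status'])
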
type clientToken:

string

param clientToken:

A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.

This field is autopopulated if not provided.

type customerEncryptionKeyArn:

string

param customerEncryptionKeyArn:

The Amazon Resource Name (ARN) of the KMS key to encrypt the flow.

type definition:

dict

param definition:

A definition of the nodes and connections between nodes in the flow.

  • connections (list) --

    An array of connection definitions in the flow.

    • (dict) --

      Contains information about a connection between two nodes in the flow.

      • configuration (dict) --

        The configuration of the connection.

        • conditional (dict) --

          The configuration of a connection originating from a Condition node.

          • condition (string) -- [REQUIRED]

            The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.

        • data (dict) --

          The configuration of a connection originating from a node that isn't a Condition node.

          • sourceOutput (string) -- [REQUIRED]

            The name of the output in the source node that the connection begins from.

          • targetInput (string) -- [REQUIRED]

            The name of the input in the target node that the connection ends at.

      • name (string) -- [REQUIRED]

        A name for the connection that you can reference.

      • source (string) -- [REQUIRED]

        The node that the connection starts at.

      • target (string) -- [REQUIRED]

        The node that the connection ends at.

      • type (string) -- [REQUIRED]

        Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).

  • nodes (list) --

    An array of node definitions in the flow.

    • (dict) --

      Contains configurations about a node in the flow.

      • configuration (dict) --

        Contains configurations for the node.

        • agent (dict) --

          Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.

          • agentAliasArn (string) -- [REQUIRED]

            The Amazon Resource Name (ARN) of the alias of the agent to invoke.

        • collector (dict) --

          Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.

        • condition (dict) --

          Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.

          • conditions (list) -- [REQUIRED]

            An array of conditions. Each member contains the name of a condition and an expression that defines the condition.

            • (dict) --

              Defines a condition in the condition node.

              • expression (string) --

                Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.

              • name (string) -- [REQUIRED]

                A name for the condition that you can reference.

        • input (dict) --

          Contains configurations for an input flow node in your flow. This is the first node in the flow. The inputs field can't be specified for this node.

        • iterator (dict) --

          Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.

          The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.

        • knowledgeBase (dict) --

          Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.

          • knowledgeBaseId (string) -- [REQUIRED]

            The unique identifier of the knowledge base to query.

          • modelId (string) --

            The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.

        • lambdaFunction (dict) --

          Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.

          • lambdaArn (string) -- [REQUIRED]

            The Amazon Resource Name (ARN) of the Lambda function to invoke.

        • lex (dict) --

          Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.

          • botAliasArn (string) -- [REQUIRED]

            The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.

          • localeId (string) -- [REQUIRED]

            The identifier of the locale in which to invoke the Amazon Lex bot.

        • output (dict) --

          Contains configurations for an output flow node in your flow. This is the last node in the flow. The outputs field can't be specified for this node.

        • prompt (dict) --

          Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.

          • sourceConfiguration (dict) -- [REQUIRED]

            Specifies whether the prompt is from Prompt management or defined inline.

            • inline (dict) --

              Contains configurations for a prompt that is defined inline.

              • additionalModelRequestFields (document) --

                Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.

              • inferenceConfiguration (dict) --

                Contains inference configurations for the prompt.

                • text (dict) --

                  Contains inference configurations for a text prompt.

                  • maxTokens (integer) --

                    The maximum number of tokens to return in the response.

                  • stopSequences (list) --

                    A list of strings that define sequences after which the model will stop generating.

                    • (string) --

                  • temperature (float) --

                    Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

                  • topP (float) --

                    The percentage of most-likely candidates that the model considers for the next token.

              • modelId (string) -- [REQUIRED]

                The unique identifier of the model or inference profile to run inference with.

              • templateConfiguration (dict) -- [REQUIRED]

                Contains a prompt and variables in the prompt that can be replaced with values at runtime.

                • text (dict) --

                  Contains configurations for the text in a message for a prompt.

                  • inputVariables (list) --

                    An array of the variables in the prompt template.

                    • (dict) --

                      Contains information about a variable in the prompt.

                      • name (string) --

                        The name of the variable.

                  • text (string) -- [REQUIRED]

                    The message for the prompt.

              • templateType (string) -- [REQUIRED]

                The type of prompt template.

            • resource (dict) --

              Contains configurations for a prompt from Prompt management.

              • promptArn (string) -- [REQUIRED]

                The Amazon Resource Name (ARN) of the prompt from Prompt management.

        • retrieval (dict) --

          Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.

          • serviceConfiguration (dict) -- [REQUIRED]

            Contains configurations for the service to use for retrieving data to return as the output from the node.

            • s3 (dict) --

              Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.

              • bucketName (string) -- [REQUIRED]

                The name of the Amazon S3 bucket from which to retrieve data.

        • storage (dict) --

          Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.

          • serviceConfiguration (dict) -- [REQUIRED]

            Contains configurations for the service to use for storing the input into the node.

            • s3 (dict) --

              Contains configurations for the Amazon S3 location in which to store the input into the node.

              • bucketName (string) -- [REQUIRED]

                The name of the Amazon S3 bucket in which to store the input into the node.

      • inputs (list) --

        An array of objects, each of which contains information about an input into the node.

        • (dict) --

          Contains configurations for an input to a node.

          • expression (string) -- [REQUIRED]

            An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.

          • name (string) -- [REQUIRED]

            A name for the input that you can reference.

          • type (string) -- [REQUIRED]

            The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.

      • name (string) -- [REQUIRED]

        A name for the node.

      • outputs (list) --

        A list of objects, each of which contains information about an output from the node.

        • (dict) --

          Contains configurations for an output from a node.

          • name (string) -- [REQUIRED]

            A name for the output that you can reference.

          • type (string) -- [REQUIRED]

            The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.

      • type (string) -- [REQUIRED]

        The type of node. This value must match the name of the key that you provide in the FlowNodeConfiguration field.

type description:

string

param description:

A description for the flow.

type executionRoleArn:

string

param executionRoleArn:

[REQUIRED]

The Amazon Resource Name (ARN) of the service role with permissions to create and manage a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.

type name:

string

param name:

[REQUIRED]

A name for the flow.

type tags:

dict

param tags:

Any tags that you want to attach to the flow. For more information, see Tagging resources in Amazon Bedrock.

  • (string) --

    • (string) --

rtype:

dict

returns:

Response Syntax

{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'definition': {
        'connections': [
            {
                'configuration': {
                    'conditional': {
                        'condition': 'string'
                    },
                    'data': {
                        'sourceOutput': 'string',
                        'targetInput': 'string'
                    }
                },
                'name': 'string',
                'source': 'string',
                'target': 'string',
                'type': 'Data'|'Conditional'
            },
        ],
        'nodes': [
            {
                'configuration': {
                    'agent': {
                        'agentAliasArn': 'string'
                    },
                    'collector': {},
                    'condition': {
                        'conditions': [
                            {
                                'expression': 'string',
                                'name': 'string'
                            },
                        ]
                    },
                    'input': {},
                    'iterator': {},
                    'knowledgeBase': {
                        'knowledgeBaseId': 'string',
                        'modelId': 'string'
                    },
                    'lambdaFunction': {
                        'lambdaArn': 'string'
                    },
                    'lex': {
                        'botAliasArn': 'string',
                        'localeId': 'string'
                    },
                    'output': {},
                    'prompt': {
                        'sourceConfiguration': {
                            'inline': {
                                'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
                                'inferenceConfiguration': {
                                    'text': {
                                        'maxTokens': 123,
                                        'stopSequences': [
                                            'string',
                                        ],
                                        'temperature': ...,
                                        'topP': ...
                                    }
                                },
                                'modelId': 'string',
                                'templateConfiguration': {
                                    'text': {
                                        'inputVariables': [
                                            {
                                                'name': 'string'
                                            },
                                        ],
                                        'text': 'string'
                                    }
                                },
                                'templateType': 'TEXT'
                            },
                            'resource': {
                                'promptArn': 'string'
                            }
                        }
                    },
                    'retrieval': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    },
                    'storage': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    }
                },
                'inputs': [
                    {
                        'expression': 'string',
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'name': 'string',
                'outputs': [
                    {
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'
            },
        ]
    },
    'description': 'string',
    'executionRoleArn': 'string',
    'id': 'string',
    'name': 'string',
    'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared',
    'updatedAt': datetime(2015, 1, 1),
    'version': 'string'
}

Response Structure

  • (dict) --

    • arn (string) --

      The Amazon Resource Name (ARN) of the flow.

    • createdAt (datetime) --

      The time at which the flow was created.

    • customerEncryptionKeyArn (string) --

      The Amazon Resource Name (ARN) of the KMS key that you encrypted the flow with.

    • definition (dict) --

      A definition of the nodes and connections between nodes in the flow.

      • connections (list) --

        An array of connection definitions in the flow.

        • (dict) --

          Contains information about a connection between two nodes in the flow.

          • configuration (dict) --

            The configuration of the connection.

            • conditional (dict) --

              The configuration of a connection originating from a Condition node.

              • condition (string) --

                The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.

            • data (dict) --

              The configuration of a connection originating from a node that isn't a Condition node.

              • sourceOutput (string) --

                The name of the output in the source node that the connection begins from.

              • targetInput (string) --

                The name of the input in the target node that the connection ends at.

          • name (string) --

            A name for the connection that you can reference.

          • source (string) --

            The node that the connection starts at.

          • target (string) --

            The node that the connection ends at.

          • type (string) --

            Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).

      • nodes (list) --

        An array of node definitions in the flow.

        • (dict) --

          Contains configurations about a node in the flow.

          • configuration (dict) --

            Contains configurations for the node.

            • agent (dict) --

              Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.

              • agentAliasArn (string) --

                The Amazon Resource Name (ARN) of the alias of the agent to invoke.

            • collector (dict) --

              Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.

            • condition (dict) --

              Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.

              • conditions (list) --

                An array of conditions. Each member contains the name of a condition and an expression that defines the condition.

                • (dict) --

                  Defines a condition in the condition node.

                  • expression (string) --

                    Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.

                  • name (string) --

                    A name for the condition that you can reference.

            • input (dict) --

              Contains configurations for an input flow node in your flow. This is the first node in the flow. The inputs field can't be specified for this node.

            • iterator (dict) --

              Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.

              The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.

            • knowledgeBase (dict) --

              Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.

              • knowledgeBaseId (string) --

                The unique identifier of the knowledge base to query.

              • modelId (string) --

                The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.

            • lambdaFunction (dict) --

              Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.

              • lambdaArn (string) --

                The Amazon Resource Name (ARN) of the Lambda function to invoke.

            • lex (dict) --

              Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.

              • botAliasArn (string) --

                The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.

              • localeId (string) --

                The identifier of the locale in which to invoke the Amazon Lex bot.

            • output (dict) --

              Contains configurations for an output flow node in your flow. This is the last node in the flow. The outputs field can't be specified for this node.

            • prompt (dict) --

              Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.

              • sourceConfiguration (dict) --

                Specifies whether the prompt is from Prompt management or defined inline.

                • inline (dict) --

                  Contains configurations for a prompt that is defined inline.

                  • additionalModelRequestFields (document) --

                    Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.

                  • inferenceConfiguration (dict) --

                    Contains inference configurations for the prompt.

                    • text (dict) --

                      Contains inference configurations for a text prompt.

                      • maxTokens (integer) --

                        The maximum number of tokens to return in the response.

                      • stopSequences (list) --

                        A list of strings that define sequences after which the model will stop generating.

                        • (string) --

                      • temperature (float) --

                        Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

                      • topP (float) --

                        The percentage of most-likely candidates that the model considers for the next token.

                  • modelId (string) --

                    The unique identifier of the model or inference profile to run inference with.

                  • templateConfiguration (dict) --

                    Contains a prompt and variables in the prompt that can be replaced with values at runtime.

                    • text (dict) --

                      Contains configurations for the text in a message for a prompt.

                      • inputVariables (list) --

                        An array of the variables in the prompt template.

                        • (dict) --

                          Contains information about a variable in the prompt.

                          • name (string) --

                            The name of the variable.

                      • text (string) --

                        The message for the prompt.

                  • templateType (string) --

                    The type of prompt template.

                • resource (dict) --

                  Contains configurations for a prompt from Prompt management.

                  • promptArn (string) --

                    The Amazon Resource Name (ARN) of the prompt from Prompt management.

            • retrieval (dict) --

              Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.

              • serviceConfiguration (dict) --

                Contains configurations for the service to use for retrieving data to return as the output from the node.

                • s3 (dict) --

                  Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.

                  • bucketName (string) --

                    The name of the Amazon S3 bucket from which to retrieve data.

            • storage (dict) --

              Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.

              • serviceConfiguration (dict) --

                Contains configurations for the service to use for storing the input into the node.

                • s3 (dict) --

                  Contains configurations for the Amazon S3 location in which to store the input into the node.

                  • bucketName (string) --

                    The name of the Amazon S3 bucket in which to store the input into the node.

          • inputs (list) --

            An array of objects, each of which contains information about an input into the node.

            • (dict) --

              Contains configurations for an input to a node.

              • expression (string) --

                An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.

              • name (string) --

                A name for the input that you can reference.

              • type (string) --

                The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.

          • name (string) --

            A name for the node.

          • outputs (list) --

            A list of objects, each of which contains information about an output from the node.

            • (dict) --

              Contains configurations for an output from a node.

              • name (string) --

                A name for the output that you can reference.

              • type (string) --

                The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.

          • type (string) --

            The type of node. This value must match the name of the key that you provide in the FlowNodeConfiguration field.

    • description (string) --

      The description of the flow.

    • executionRoleArn (string) --

      The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.

    • id (string) --

      The unique identifier of the flow.

    • name (string) --

      The name of the flow.

    • status (string) --

      The status of the flow. When you submit this request, the status will be NotPrepared. If creation fails, the status becomes Failed.

    • updatedAt (datetime) --

      The time at which the flow was last updated.

    • version (string) --

      The version of the flow. When you create a flow, the version created is the DRAFT version.

CreateFlowVersion (updated)
Changes (response)
{'definition': {'nodes': {'configuration': {'prompt': {'sourceConfiguration': {'inline': {'additionalModelRequestFields': {}}}}}}}}

Creates a version of the flow that you can deploy. For more information, see Deploy a flow in Amazon Bedrock in the Amazon Bedrock User Guide.

See also: AWS API Documentation

Request Syntax

client.create_flow_version(
    clientToken='string',
    description='string',
    flowIdentifier='string'
)
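
The following is a minimal, illustrative sketch of creating a deployable version from an existing flow; the flow identifier below is a hypothetical placeholder.

import boto3

client = boto3.client('bedrock-agent')

response = client.create_flow_version(
    flowIdentifier='FLOW1234ID',  # hypothetical flow ID or ARN
    description='First deployable version'
)
print(response['version'], response['status'])
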
type clientToken:

string

param clientToken:

A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.

This field is autopopulated if not provided.

type description:

string

param description:

A description of the version of the flow.

type flowIdentifier:

string

param flowIdentifier:

[REQUIRED]

The unique identifier of the flow that you want to create a version of.

rtype:

dict

returns:

Response Syntax

{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'definition': {
        'connections': [
            {
                'configuration': {
                    'conditional': {
                        'condition': 'string'
                    },
                    'data': {
                        'sourceOutput': 'string',
                        'targetInput': 'string'
                    }
                },
                'name': 'string',
                'source': 'string',
                'target': 'string',
                'type': 'Data'|'Conditional'
            },
        ],
        'nodes': [
            {
                'configuration': {
                    'agent': {
                        'agentAliasArn': 'string'
                    },
                    'collector': {},
                    'condition': {
                        'conditions': [
                            {
                                'expression': 'string',
                                'name': 'string'
                            },
                        ]
                    },
                    'input': {},
                    'iterator': {},
                    'knowledgeBase': {
                        'knowledgeBaseId': 'string',
                        'modelId': 'string'
                    },
                    'lambdaFunction': {
                        'lambdaArn': 'string'
                    },
                    'lex': {
                        'botAliasArn': 'string',
                        'localeId': 'string'
                    },
                    'output': {},
                    'prompt': {
                        'sourceConfiguration': {
                            'inline': {
                                'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
                                'inferenceConfiguration': {
                                    'text': {
                                        'maxTokens': 123,
                                        'stopSequences': [
                                            'string',
                                        ],
                                        'temperature': ...,
                                        'topP': ...
                                    }
                                },
                                'modelId': 'string',
                                'templateConfiguration': {
                                    'text': {
                                        'inputVariables': [
                                            {
                                                'name': 'string'
                                            },
                                        ],
                                        'text': 'string'
                                    }
                                },
                                'templateType': 'TEXT'
                            },
                            'resource': {
                                'promptArn': 'string'
                            }
                        }
                    },
                    'retrieval': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    },
                    'storage': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    }
                },
                'inputs': [
                    {
                        'expression': 'string',
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'name': 'string',
                'outputs': [
                    {
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'
            },
        ]
    },
    'description': 'string',
    'executionRoleArn': 'string',
    'id': 'string',
    'name': 'string',
    'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared',
    'version': 'string'
}

Response Structure

  • (dict) --

    • arn (string) --

      The Amazon Resource Name (ARN) of the flow.

    • createdAt (datetime) --

      The time at which the flow was created.

    • customerEncryptionKeyArn (string) --

      The KMS key that the flow is encrypted with.

    • definition (dict) --

      A definition of the nodes and connections in the flow.

      • connections (list) --

        An array of connection definitions in the flow.

        • (dict) --

          Contains information about a connection between two nodes in the flow.

          • configuration (dict) --

            The configuration of the connection.

            • conditional (dict) --

              The configuration of a connection originating from a Condition node.

              • condition (string) --

                The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.

            • data (dict) --

              The configuration of a connection originating from a node that isn't a Condition node.

              • sourceOutput (string) --

                The name of the output in the source node that the connection begins from.

              • targetInput (string) --

                The name of the input in the target node that the connection ends at.

          • name (string) --

            A name for the connection that you can reference.

          • source (string) --

            The node that the connection starts at.

          • target (string) --

            The node that the connection ends at.

          • type (string) --

            Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).

      • nodes (list) --

        An array of node definitions in the flow.

        • (dict) --

          Contains configurations about a node in the flow.

          • configuration (dict) --

            Contains configurations for the node.

            • agent (dict) --

              Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.

              • agentAliasArn (string) --

                The Amazon Resource Name (ARN) of the alias of the agent to invoke.

            • collector (dict) --

              Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.

            • condition (dict) --

              Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.

              • conditions (list) --

                An array of conditions. Each member contains the name of a condition and an expression that defines the condition.

                • (dict) --

                  Defines a condition in the condition node.

                  • expression (string) --

                    Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.

                  • name (string) --

                    A name for the condition that you can reference.

            • input (dict) --

              Contains configurations for an input flow node in your flow. This is the first node in the flow. The inputs field can't be specified for this node.

            • iterator (dict) --

              Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.

              The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.

            • knowledgeBase (dict) --

              Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.

              • knowledgeBaseId (string) --

                The unique identifier of the knowledge base to query.

              • modelId (string) --

                The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.

            • lambdaFunction (dict) --

              Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.

              • lambdaArn (string) --

                The Amazon Resource Name (ARN) of the Lambda function to invoke.

            • lex (dict) --

              Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.

              • botAliasArn (string) --

                The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.

              • localeId (string) --

                The identifier of the locale in which to invoke the Amazon Lex bot.

            • output (dict) --

              Contains configurations for an output flow node in your flow. This is the last node in the flow. The outputs field can't be specified for this node.

            • prompt (dict) --

              Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.

              • sourceConfiguration (dict) --

                Specifies whether the prompt is from Prompt management or defined inline.

                • inline (dict) --

                  Contains configurations for a prompt that is defined inline.

                  • additionalModelRequestFields (document) --

                    Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.

                  • inferenceConfiguration (dict) --

                    Contains inference configurations for the prompt.

                    • text (dict) --

                      Contains inference configurations for a text prompt.

                      • maxTokens (integer) --

                        The maximum number of tokens to return in the response.

                      • stopSequences (list) --

                        A list of strings that define sequences after which the model will stop generating.

                        • (string) --

                      • temperature (float) --

                        Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

                      • topP (float) --

                        The percentage of most-likely candidates that the model considers for the next token.

                  • modelId (string) --

                    The unique identifier of the model or inference profile to run inference with.

                  • templateConfiguration (dict) --

                    Contains a prompt and variables in the prompt that can be replaced with values at runtime.

                    • text (dict) --

                      Contains configurations for the text in a message for a prompt.

                      • inputVariables (list) --

                        An array of the variables in the prompt template.

                        • (dict) --

                          Contains information about a variable in the prompt.

                          • name (string) --

                            The name of the variable.

                      • text (string) --

                        The message for the prompt.

                  • templateType (string) --

                    The type of prompt template.

                • resource (dict) --

                  Contains configurations for a prompt from Prompt management.

                  • promptArn (string) --

                    The Amazon Resource Name (ARN) of the prompt from Prompt management.

            • retrieval (dict) --

              Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.

              • serviceConfiguration (dict) --

                Contains configurations for the service to use for retrieving data to return as the output from the node.

                • s3 (dict) --

                  Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.

                  • bucketName (string) --

                    The name of the Amazon S3 bucket from which to retrieve data.

            • storage (dict) --

              Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.

              • serviceConfiguration (dict) --

                Contains configurations for the service to use for storing the input into the node.

                • s3 (dict) --

                  Contains configurations for the Amazon S3 location in which to store the input into the node.

                  • bucketName (string) --

                    The name of the Amazon S3 bucket in which to store the input into the node.

          • inputs (list) --

            An array of objects, each of which contains information about an input into the node.

            • (dict) --

              Contains configurations for an input to a node.

              • expression (string) --

                An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.

              • name (string) --

                A name for the input that you can reference.

              • type (string) --

                The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.

          • name (string) --

            A name for the node.

          • outputs (list) --

            A list of objects, each of which contains information about an output from the node.

            • (dict) --

              Contains configurations for an output from a node.

              • name (string) --

                A name for the output that you can reference.

              • type (string) --

                The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.

          • type (string) --

            The type of node. This value must match the name of the key that you provide in the FlowNodeConfiguration field.

    • description (string) --

      The description of the version.

    • executionRoleArn (string) --

      The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.

    • id (string) --

      The unique identifier of the flow.

    • name (string) --

      The name of the version.

    • status (string) --

      The status of the flow.

    • version (string) --

      The version of the flow that was created. Versions are numbered incrementally, starting from 1.

CreatePrompt (updated)
Changes (both)
{'variants': {'additionalModelRequestFields': {}}}

Creates a prompt in your prompt library that you can add to a flow. For more information, see Prompt management in Amazon Bedrock, Create a prompt using Prompt management and Prompt flows in Amazon Bedrock in the Amazon Bedrock User Guide.

See also: AWS API Documentation

Request Syntax

client.create_prompt(
    clientToken='string',
    customerEncryptionKeyArn='string',
    defaultVariant='string',
    description='string',
    name='string',
    tags={
        'string': 'string'
    },
    variants=[
        {
            'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ]
)
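
The following is a minimal, illustrative sketch of creating a prompt with a single variant that passes model-specific parameters through the new additionalModelRequestFields field. The model ID and the top_k key are hypothetical placeholders; an inference profile ARN can also be passed as modelId.

import boto3

client = boto3.client('bedrock-agent')

response = client.create_prompt(
    name='summarizer-prompt',
    defaultVariant='v1',
    variants=[
        {
            'name': 'v1',
            'templateType': 'TEXT',
            'templateConfiguration': {
                'text': {
                    'text': 'Summarize the following text: {{document}}',
                    'inputVariables': [{'name': 'document'}]
                }
            },
            'modelId': 'anthropic.claude-3-haiku-20240307-v1:0',  # or an inference profile ARN
            'inferenceConfiguration': {
                'text': {'maxTokens': 512, 'temperature': 0.2}
            },
            # Model-specific parameters that inferenceConfiguration doesn't cover.
            'additionalModelRequestFields': {'top_k': 250}  # hypothetical key
        }
    ]
)
print(response['id'], response['arn'])
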
type clientToken:

string

param clientToken:

A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.

This field is autopopulated if not provided.

type customerEncryptionKeyArn:

string

param customerEncryptionKeyArn:

The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.

type defaultVariant:

string

param defaultVariant:

The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.

type description:

string

param description:

A description for the prompt.

type name:

string

param name:

[REQUIRED]

A name for the prompt.

type tags:

dict

param tags:

Any tags that you want to attach to the prompt. For more information, see Tagging resources in Amazon Bedrock.

  • (string) --

    • (string) --

type variants:

list

param variants:

A list of objects, each containing details about a variant of the prompt.

  • (dict) --

    Contains details about a variant of the prompt.

    • additionalModelRequestFields (document) --

      Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.

    • inferenceConfiguration (dict) --

      Contains inference configurations for the prompt variant.

      • text (dict) --

        Contains inference configurations for a text prompt.

        • maxTokens (integer) --

          The maximum number of tokens to return in the response.

        • stopSequences (list) --

          A list of strings that define sequences after which the model will stop generating.

          • (string) --

        • temperature (float) --

          Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

        • topP (float) --

          The percentage of most-likely candidates that the model considers for the next token.

    • metadata (list) --

      An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

      • (dict) --

        Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

        • key (string) -- [REQUIRED]

          The key of a metadata tag for a prompt variant.

        • value (string) -- [REQUIRED]

          The value of a metadata tag for a prompt variant.

    • modelId (string) --

      The unique identifier of the model or inference profile with which to run inference on the prompt.

    • name (string) -- [REQUIRED]

      The name of the prompt variant.

    • templateConfiguration (dict) -- [REQUIRED]

      Contains configurations for the prompt template.

      • text (dict) --

        Contains configurations for the text in a message for a prompt.

        • inputVariables (list) --

          An array of the variables in the prompt template.

          • (dict) --

            Contains information about a variable in the prompt.

            • name (string) --

              The name of the variable.

        • text (string) -- [REQUIRED]

          The message for the prompt.

    • templateType (string) -- [REQUIRED]

      The type of prompt template to use.

rtype:

dict

returns:

Response Syntax

{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'defaultVariant': 'string',
    'description': 'string',
    'id': 'string',
    'name': 'string',
    'updatedAt': datetime(2015, 1, 1),
    'variants': [
        {
            'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ],
    'version': 'string'
}

Response Structure

  • (dict) --

    • arn (string) --

      The Amazon Resource Name (ARN) of the prompt.

    • createdAt (datetime) --

      The time at which the prompt was created.

    • customerEncryptionKeyArn (string) --

      The Amazon Resource Name (ARN) of the KMS key that you encrypted the prompt with.

    • defaultVariant (string) --

      The name of the default variant for your prompt.

    • description (string) --

      The description of the prompt.

    • id (string) --

      The unique identifier of the prompt.

    • name (string) --

      The name of the prompt.

    • updatedAt (datetime) --

      The time at which the prompt was last updated.

    • variants (list) --

      A list of objects, each containing details about a variant of the prompt.

      • (dict) --

        Contains details about a variant of the prompt.

        • additionalModelRequestFields (document) --

          Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.

        • inferenceConfiguration (dict) --

          Contains inference configurations for the prompt variant.

          • text (dict) --

            Contains inference configurations for a text prompt.

            • maxTokens (integer) --

              The maximum number of tokens to return in the response.

            • stopSequences (list) --

              A list of strings that define sequences after which the model will stop generating.

              • (string) --

            • temperature (float) --

              Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

            • topP (float) --

              The percentage of most-likely candidates that the model considers for the next token.

        • metadata (list) --

          An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

          • (dict) --

            Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

            • key (string) --

              The key of a metadata tag for a prompt variant.

            • value (string) --

              The value of a metadata tag for a prompt variant.

        • modelId (string) --

          The unique identifier of the model or inference profile with which to run inference on the prompt.

        • name (string) --

          The name of the prompt variant.

        • templateConfiguration (dict) --

          Contains configurations for the prompt template.

          • text (dict) --

            Contains configurations for the text in a message for a prompt.

            • inputVariables (list) --

              An array of the variables in the prompt template.

              • (dict) --

                Contains information about a variable in the prompt.

                • name (string) --

                  The name of the variable.

            • text (string) --

              The message for the prompt.

        • templateType (string) --

          The type of prompt template to use.

    • version (string) --

      The version of the prompt. When you create a prompt, the version created is the DRAFT version.
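
The following is a minimal, illustrative sketch (not part of the API reference) of calling this operation with a single text variant that passes additionalModelRequestFields. The model ID and the top_k field are placeholders; substitute parameters that your chosen model actually supports.

import boto3

client = boto3.client('bedrock-agent')

response = client.create_prompt(
    name='summarizer',
    defaultVariant='default',
    variants=[
        {
            'name': 'default',
            'templateType': 'TEXT',
            'templateConfiguration': {
                'text': {
                    'text': 'Summarize the following text: {{document}}',
                    'inputVariables': [{'name': 'document'}],
                }
            },
            'modelId': 'anthropic.claude-3-haiku-20240307-v1:0',  # placeholder model ID
            'inferenceConfiguration': {
                'text': {'maxTokens': 512, 'temperature': 0.2}
            },
            # Model-specific parameter that isn't part of inferenceConfiguration
            'additionalModelRequestFields': {'top_k': 250},
        }
    ],
)
print(response['id'], response['version'])  # version is DRAFT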

CreatePromptVersion (updated) Link ¶
Changes (response)
{'variants': {'additionalModelRequestFields': {}}}

Creates a static snapshot of your prompt that can be deployed to production. For more information, see Deploy prompts using Prompt management by creating versions in the Amazon Bedrock User Guide.

See also: AWS API Documentation

Request Syntax

client.create_prompt_version(
    clientToken='string',
    description='string',
    promptIdentifier='string',
    tags={
        'string': 'string'
    }
)
type clientToken:

string

param clientToken:

A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.

This field is autopopulated if not provided.

type description:

string

param description:

A description for the version of the prompt.

type promptIdentifier:

string

param promptIdentifier:

[REQUIRED]

The unique identifier of the prompt that you want to create a version of.

type tags:

dict

param tags:

Any tags that you want to attach to the version of the prompt. For more information, see Tagging resources in Amazon Bedrock.

  • (string) --

    • (string) --

rtype:

dict

returns:

Response Syntax

{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'defaultVariant': 'string',
    'description': 'string',
    'id': 'string',
    'name': 'string',
    'updatedAt': datetime(2015, 1, 1),
    'variants': [
        {
            'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ],
    'version': 'string'
}

Response Structure

  • (dict) --

    • arn (string) --

      The Amazon Resource Name (ARN) of the version of the prompt.

    • createdAt (datetime) --

      The time at which the prompt was created.

    • customerEncryptionKeyArn (string) --

      The Amazon Resource Name (ARN) of the KMS key to encrypt the version of the prompt.

    • defaultVariant (string) --

      The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.

    • description (string) --

      A description for the version.

    • id (string) --

      The unique identifier of the prompt.

    • name (string) --

      The name of the prompt.

    • updatedAt (datetime) --

      The time at which the prompt was last updated.

    • variants (list) --

      A list of objects, each containing details about a variant of the prompt.

      • (dict) --

        Contains details about a variant of the prompt.

        • additionalModelRequestFields (document) --

          Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.

        • inferenceConfiguration (dict) --

          Contains inference configurations for the prompt variant.

          • text (dict) --

            Contains inference configurations for a text prompt.

            • maxTokens (integer) --

              The maximum number of tokens to return in the response.

            • stopSequences (list) --

              A list of strings that define sequences after which the model will stop generating.

              • (string) --

            • temperature (float) --

              Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

            • topP (float) --

              The percentage of most-likely candidates that the model considers for the next token.

        • metadata (list) --

          An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

          • (dict) --

            Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

            • key (string) --

              The key of a metadata tag for a prompt variant.

            • value (string) --

              The value of a metadata tag for a prompt variant.

        • modelId (string) --

          The unique identifier of the model or inference profile with which to run inference on the prompt.

        • name (string) --

          The name of the prompt variant.

        • templateConfiguration (dict) --

          Contains configurations for the prompt template.

          • text (dict) --

            Contains configurations for the text in a message for a prompt.

            • inputVariables (list) --

              An array of the variables in the prompt template.

              • (dict) --

                Contains information about a variable in the prompt.

                • name (string) --

                  The name of the variable.

            • text (string) --

              The message for the prompt.

        • templateType (string) --

          The type of prompt template to use.

    • version (string) --

      The version of the prompt that was created. Versions are numbered incrementally, starting from 1.
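
As a brief usage sketch (the prompt identifier below is a placeholder), the call snapshots the DRAFT version of a prompt; clientToken can be omitted because boto3 autopopulates it:

import boto3

client = boto3.client('bedrock-agent')

version = client.create_prompt_version(
    promptIdentifier='PROMPT1234EX',            # placeholder prompt ID
    description='First production snapshot',
)
print(version['version'])  # numbered incrementally, starting from 1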

GetFlow (updated) Link ¶
Changes (response)
{'definition': {'nodes': {'configuration': {'prompt': {'sourceConfiguration': {'inline': {'additionalModelRequestFields': {}}}}}}}}

Retrieves information about a flow. For more information, see Manage a flow in Amazon Bedrock in the Amazon Bedrock User Guide.

See also: AWS API Documentation

Request Syntax

client.get_flow(
    flowIdentifier='string'
)
type flowIdentifier:

string

param flowIdentifier:

[REQUIRED]

The unique identifier of the flow.

rtype:

dict

returns:

Response Syntax

{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'definition': {
        'connections': [
            {
                'configuration': {
                    'conditional': {
                        'condition': 'string'
                    },
                    'data': {
                        'sourceOutput': 'string',
                        'targetInput': 'string'
                    }
                },
                'name': 'string',
                'source': 'string',
                'target': 'string',
                'type': 'Data'|'Conditional'
            },
        ],
        'nodes': [
            {
                'configuration': {
                    'agent': {
                        'agentAliasArn': 'string'
                    },
                    'collector': {},
                    'condition': {
                        'conditions': [
                            {
                                'expression': 'string',
                                'name': 'string'
                            },
                        ]
                    },
                    'input': {},
                    'iterator': {},
                    'knowledgeBase': {
                        'knowledgeBaseId': 'string',
                        'modelId': 'string'
                    },
                    'lambdaFunction': {
                        'lambdaArn': 'string'
                    },
                    'lex': {
                        'botAliasArn': 'string',
                        'localeId': 'string'
                    },
                    'output': {},
                    'prompt': {
                        'sourceConfiguration': {
                            'inline': {
                                'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
                                'inferenceConfiguration': {
                                    'text': {
                                        'maxTokens': 123,
                                        'stopSequences': [
                                            'string',
                                        ],
                                        'temperature': ...,
                                        'topP': ...
                                    }
                                },
                                'modelId': 'string',
                                'templateConfiguration': {
                                    'text': {
                                        'inputVariables': [
                                            {
                                                'name': 'string'
                                            },
                                        ],
                                        'text': 'string'
                                    }
                                },
                                'templateType': 'TEXT'
                            },
                            'resource': {
                                'promptArn': 'string'
                            }
                        }
                    },
                    'retrieval': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    },
                    'storage': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    }
                },
                'inputs': [
                    {
                        'expression': 'string',
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'name': 'string',
                'outputs': [
                    {
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'
            },
        ]
    },
    'description': 'string',
    'executionRoleArn': 'string',
    'id': 'string',
    'name': 'string',
    'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared',
    'updatedAt': datetime(2015, 1, 1),
    'validations': [
        {
            'message': 'string',
            'severity': 'Warning'|'Error'
        },
    ],
    'version': 'string'
}

Response Structure

  • (dict) --

    • arn (string) --

      The Amazon Resource Name (ARN) of the flow.

    • createdAt (datetime) --

      The time at which the flow was created.

    • customerEncryptionKeyArn (string) --

      The Amazon Resource Name (ARN) of the KMS key that the flow is encrypted with.

    • definition (dict) --

      The definition of the nodes and connections between the nodes in the flow.

      • connections (list) --

        An array of connection definitions in the flow.

        • (dict) --

          Contains information about a connection between two nodes in the flow.

          • configuration (dict) --

            The configuration of the connection.

            • conditional (dict) --

              The configuration of a connection originating from a Condition node.

              • condition (string) --

                The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.

            • data (dict) --

              The configuration of a connection originating from a node that isn't a Condition node.

              • sourceOutput (string) --

                The name of the output in the source node that the connection begins from.

              • targetInput (string) --

                The name of the input in the target node that the connection ends at.

          • name (string) --

            A name for the connection that you can reference.

          • source (string) --

            The node that the connection starts at.

          • target (string) --

            The node that the connection ends at.

          • type (string) --

            Whether the source node that the connection begins from is a condition node ( Conditional) or not ( Data).

      • nodes (list) --

        An array of node definitions in the flow.

        • (dict) --

          Contains configurations about a node in the flow.

          • configuration (dict) --

            Contains configurations for the node.

            • agent (dict) --

              Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.

              • agentAliasArn (string) --

                The Amazon Resource Name (ARN) of the alias of the agent to invoke.

            • collector (dict) --

              Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.

            • condition (dict) --

              Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.

              • conditions (list) --

                An array of conditions. Each member contains the name of a condition and an expression that defines the condition.

                • (dict) --

                  Defines a condition in the condition node.

                  • expression (string) --

                    Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.

                  • name (string) --

                    A name for the condition that you can reference.

            • input (dict) --

              Contains configurations for an input flow node in your flow. This is the first node in the flow; the inputs field can't be specified for this node.

            • iterator (dict) --

              Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.

              The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.

            • knowledgeBase (dict) --

              Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.

              • knowledgeBaseId (string) --

                The unique identifier of the knowledge base to query.

              • modelId (string) --

                The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.

            • lambdaFunction (dict) --

              Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.

              • lambdaArn (string) --

                The Amazon Resource Name (ARN) of the Lambda function to invoke.

            • lex (dict) --

              Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.

              • botAliasArn (string) --

                The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.

              • localeId (string) --

                The Region to invoke the Amazon Lex bot in.

            • output (dict) --

              Contains configurations for an output flow node in your flow. This is the last node in the flow; the outputs field can't be specified for this node.

            • prompt (dict) --

              Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.

              • sourceConfiguration (dict) --

                Specifies whether the prompt is from Prompt management or defined inline.

                • inline (dict) --

                  Contains configurations for a prompt that is defined inline.

                  • additionalModelRequestFields (document) --

                    Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.

                  • inferenceConfiguration (dict) --

                    Contains inference configurations for the prompt.

                    • text (dict) --

                      Contains inference configurations for a text prompt.

                      • maxTokens (integer) --

                        The maximum number of tokens to return in the response.

                      • stopSequences (list) --

                        A list of strings that define sequences after which the model will stop generating.

                        • (string) --

                      • temperature (float) --

                        Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

                      • topP (float) --

                        The percentage of most-likely candidates that the model considers for the next token.

                  • modelId (string) --

                    The unique identifier of the model or inference profile to run inference with.

                  • templateConfiguration (dict) --

                    Contains a prompt and variables in the prompt that can be replaced with values at runtime.

                    • text (dict) --

                      Contains configurations for the text in a message for a prompt.

                      • inputVariables (list) --

                        An array of the variables in the prompt template.

                        • (dict) --

                          Contains information about a variable in the prompt.

                          • name (string) --

                            The name of the variable.

                      • text (string) --

                        The message for the prompt.

                  • templateType (string) --

                    The type of prompt template.

                • resource (dict) --

                  Contains configurations for a prompt from Prompt management.

                  • promptArn (string) --

                    The Amazon Resource Name (ARN) of the prompt from Prompt management.

            • retrieval (dict) --

              Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.

              • serviceConfiguration (dict) --

                Contains configurations for the service to use for retrieving data to return as the output from the node.

                • s3 (dict) --

                  Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.

                  • bucketName (string) --

                    The name of the Amazon S3 bucket from which to retrieve data.

            • storage (dict) --

              Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.

              • serviceConfiguration (dict) --

                Contains configurations for the service to use for storing the input into the node.

                • s3 (dict) --

                  Contains configurations for the Amazon S3 location in which to store the input into the node.

                  • bucketName (string) --

                    The name of the Amazon S3 bucket in which to store the input into the node.

          • inputs (list) --

            An array of objects, each of which contains information about an input into the node.

            • (dict) --

              Contains configurations for an input to a node.

              • expression (string) --

                An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.

              • name (string) --

                A name for the input that you can reference.

              • type (string) --

                The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.

          • name (string) --

            A name for the node.

          • outputs (list) --

            A list of objects, each of which contains information about an output from the node.

            • (dict) --

              Contains configurations for an output from a node.

              • name (string) --

                A name for the output that you can reference.

              • type (string) --

                The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.

          • type (string) --

            The type of node. This value must match the name of the key that you provide in the configuration in the FlowNodeConfiguration field.

    • description (string) --

      The description of the flow.

    • executionRoleArn (string) --

      The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.

    • id (string) --

      The unique identifier of the flow.

    • name (string) --

      The name of the flow.

    • status (string) --

      The status of the flow. The following statuses are possible:

      • NotPrepared – The flow has been created or updated, but hasn't been prepared. If you just created the flow, you can't test it. If you updated the flow, the DRAFT version won't contain the latest changes for testing. Send a PrepareFlow request to package the latest changes into the DRAFT version.

      • Preparing – The flow is being prepared so that the DRAFT version contains the latest changes for testing.

      • Prepared – The flow is prepared and the DRAFT version contains the latest changes for testing.

      • Failed – The last API operation that you invoked on the flow failed. Send a GetFlow request and check the error message in the validations field.

    • updatedAt (datetime) --

      The time at which the flow was last updated.

    • validations (list) --

      A list of validation error messages related to the last failed operation on the flow.

      • (dict) --

        Contains information about validation of the flow.

        • message (string) --

          A message describing the validation error.

        • severity (string) --

          The severity of the issue described in the message.

    • version (string) --

      The version of the flow for which information was retrieved.
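
A short sketch (the flow identifier is a placeholder) of retrieving a flow, checking its preparation status and validation messages, and inspecting additionalModelRequestFields on any inline prompt nodes:

import boto3

client = boto3.client('bedrock-agent')

flow = client.get_flow(flowIdentifier='FLOW1234EX')   # placeholder flow ID

print(flow['status'])                                 # e.g. 'Prepared' or 'NotPrepared'
for issue in flow.get('validations', []):
    print(issue['severity'], issue['message'])

# Inline prompt nodes expose their model-specific overrides here
for node in flow['definition']['nodes']:
    inline = node['configuration'].get('prompt', {}).get('sourceConfiguration', {}).get('inline')
    if inline:
        print(node['name'], inline.get('additionalModelRequestFields'))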

GetFlowVersion (updated) Link ¶
Changes (response)
{'definition': {'nodes': {'configuration': {'prompt': {'sourceConfiguration': {'inline': {'additionalModelRequestFields': {}}}}}}}}

Retrieves information about a version of a flow. For more information, see Deploy a flow in Amazon Bedrock in the Amazon Bedrock User Guide.

See also: AWS API Documentation

Request Syntax

client.get_flow_version(
    flowIdentifier='string',
    flowVersion='string'
)
type flowIdentifier:

string

param flowIdentifier:

[REQUIRED]

The unique identifier of the flow for which to get information.

type flowVersion:

string

param flowVersion:

[REQUIRED]

The version of the flow for which to get information.

rtype:

dict

returns:

Response Syntax

{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'definition': {
        'connections': [
            {
                'configuration': {
                    'conditional': {
                        'condition': 'string'
                    },
                    'data': {
                        'sourceOutput': 'string',
                        'targetInput': 'string'
                    }
                },
                'name': 'string',
                'source': 'string',
                'target': 'string',
                'type': 'Data'|'Conditional'
            },
        ],
        'nodes': [
            {
                'configuration': {
                    'agent': {
                        'agentAliasArn': 'string'
                    },
                    'collector': {},
                    'condition': {
                        'conditions': [
                            {
                                'expression': 'string',
                                'name': 'string'
                            },
                        ]
                    },
                    'input': {},
                    'iterator': {},
                    'knowledgeBase': {
                        'knowledgeBaseId': 'string',
                        'modelId': 'string'
                    },
                    'lambdaFunction': {
                        'lambdaArn': 'string'
                    },
                    'lex': {
                        'botAliasArn': 'string',
                        'localeId': 'string'
                    },
                    'output': {},
                    'prompt': {
                        'sourceConfiguration': {
                            'inline': {
                                'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
                                'inferenceConfiguration': {
                                    'text': {
                                        'maxTokens': 123,
                                        'stopSequences': [
                                            'string',
                                        ],
                                        'temperature': ...,
                                        'topP': ...
                                    }
                                },
                                'modelId': 'string',
                                'templateConfiguration': {
                                    'text': {
                                        'inputVariables': [
                                            {
                                                'name': 'string'
                                            },
                                        ],
                                        'text': 'string'
                                    }
                                },
                                'templateType': 'TEXT'
                            },
                            'resource': {
                                'promptArn': 'string'
                            }
                        }
                    },
                    'retrieval': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    },
                    'storage': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    }
                },
                'inputs': [
                    {
                        'expression': 'string',
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'name': 'string',
                'outputs': [
                    {
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'
            },
        ]
    },
    'description': 'string',
    'executionRoleArn': 'string',
    'id': 'string',
    'name': 'string',
    'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared',
    'version': 'string'
}

Response Structure

  • (dict) --

    • arn (string) --

      The Amazon Resource Name (ARN) of the flow.

    • createdAt (datetime) --

      The time at which the flow was created.

    • customerEncryptionKeyArn (string) --

      The Amazon Resource Name (ARN) of the KMS key that the version of the flow is encrypted with.

    • definition (dict) --

      The definition of the nodes and connections between nodes in the flow.

      • connections (list) --

        An array of connection definitions in the flow.

        • (dict) --

          Contains information about a connection between two nodes in the flow.

          • configuration (dict) --

            The configuration of the connection.

            • conditional (dict) --

              The configuration of a connection originating from a Condition node.

              • condition (string) --

                The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.

            • data (dict) --

              The configuration of a connection originating from a node that isn't a Condition node.

              • sourceOutput (string) --

                The name of the output in the source node that the connection begins from.

              • targetInput (string) --

                The name of the input in the target node that the connection ends at.

          • name (string) --

            A name for the connection that you can reference.

          • source (string) --

            The node that the connection starts at.

          • target (string) --

            The node that the connection ends at.

          • type (string) --

            Whether the source node that the connection begins from is a condition node ( Conditional) or not ( Data).

      • nodes (list) --

        An array of node definitions in the flow.

        • (dict) --

          Contains configurations about a node in the flow.

          • configuration (dict) --

            Contains configurations for the node.

            • agent (dict) --

              Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.

              • agentAliasArn (string) --

                The Amazon Resource Name (ARN) of the alias of the agent to invoke.

            • collector (dict) --

              Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.

            • condition (dict) --

              Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.

              • conditions (list) --

                An array of conditions. Each member contains the name of a condition and an expression that defines the condition.

                • (dict) --

                  Defines a condition in the condition node.

                  • expression (string) --

                    Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.

                  • name (string) --

                    A name for the condition that you can reference.

            • input (dict) --

              Contains configurations for an input flow node in your flow. This is the first node in the flow; the inputs field can't be specified for this node.

            • iterator (dict) --

              Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.

              The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.

            • knowledgeBase (dict) --

              Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.

              • knowledgeBaseId (string) --

                The unique identifier of the knowledge base to query.

              • modelId (string) --

                The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.

            • lambdaFunction (dict) --

              Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.

              • lambdaArn (string) --

                The Amazon Resource Name (ARN) of the Lambda function to invoke.

            • lex (dict) --

              Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.

              • botAliasArn (string) --

                The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.

              • localeId (string) --

                The Region to invoke the Amazon Lex bot in.

            • output (dict) --

              Contains configurations for an output flow node in your flow. This is the last node in the flow; the outputs field can't be specified for this node.

            • prompt (dict) --

              Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.

              • sourceConfiguration (dict) --

                Specifies whether the prompt is from Prompt management or defined inline.

                • inline (dict) --

                  Contains configurations for a prompt that is defined inline.

                  • additionalModelRequestFields (document) --

                    Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.

                  • inferenceConfiguration (dict) --

                    Contains inference configurations for the prompt.

                    • text (dict) --

                      Contains inference configurations for a text prompt.

                      • maxTokens (integer) --

                        The maximum number of tokens to return in the response.

                      • stopSequences (list) --

                        A list of strings that define sequences after which the model will stop generating.

                        • (string) --

                      • temperature (float) --

                        Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

                      • topP (float) --

                        The percentage of most-likely candidates that the model considers for the next token.

                  • modelId (string) --

                    The unique identifier of the model or inference profile to run inference with.

                  • templateConfiguration (dict) --

                    Contains a prompt and variables in the prompt that can be replaced with values at runtime.

                    • text (dict) --

                      Contains configurations for the text in a message for a prompt.

                      • inputVariables (list) --

                        An array of the variables in the prompt template.

                        • (dict) --

                          Contains information about a variable in the prompt.

                          • name (string) --

                            The name of the variable.

                      • text (string) --

                        The message for the prompt.

                  • templateType (string) --

                    The type of prompt template.

                • resource (dict) --

                  Contains configurations for a prompt from Prompt management.

                  • promptArn (string) --

                    The Amazon Resource Name (ARN) of the prompt from Prompt management.

            • retrieval (dict) --

              Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.

              • serviceConfiguration (dict) --

                Contains configurations for the service to use for retrieving data to return as the output from the node.

                • s3 (dict) --

                  Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.

                  • bucketName (string) --

                    The name of the Amazon S3 bucket from which to retrieve data.

            • storage (dict) --

              Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.

              • serviceConfiguration (dict) --

                Contains configurations for the service to use for storing the input into the node.

                • s3 (dict) --

                  Contains configurations for the Amazon S3 location in which to store the input into the node.

                  • bucketName (string) --

                    The name of the Amazon S3 bucket in which to store the input into the node.

          • inputs (list) --

            An array of objects, each of which contains information about an input into the node.

            • (dict) --

              Contains configurations for an input to a node.

              • expression (string) --

                An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.

              • name (string) --

                A name for the input that you can reference.

              • type (string) --

                The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.

          • name (string) --

            A name for the node.

          • outputs (list) --

            A list of objects, each of which contains information about an output from the node.

            • (dict) --

              Contains configurations for an output from a node.

              • name (string) --

                A name for the output that you can reference.

              • type (string) --

                The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.

          • type (string) --

            The type of node. This value must match the name of the key that you provide in the configuration in the FlowNodeConfiguration field.

    • description (string) --

      The description of the flow.

    • executionRoleArn (string) --

      The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.

    • id (string) --

      The unique identifier of the flow.

    • name (string) --

      The name of the version.

    • status (string) --

      The status of the flow.

    • version (string) --

      The version of the flow for which information was retrieved.
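
A minimal sketch (identifiers are placeholders) of reading a specific published version of a flow:

import boto3

client = boto3.client('bedrock-agent')

flow_v1 = client.get_flow_version(
    flowIdentifier='FLOW1234EX',   # placeholder flow ID
    flowVersion='1',
)
print(flow_v1['status'], flow_v1['version'])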

GetPrompt (updated) Link ¶
Changes (response)
{'variants': {'additionalModelRequestFields': {}}}

Retrieves information about the working draft ( DRAFT version) of a prompt or a version of it, depending on whether you include the promptVersion field or not. For more information, see View information about prompts using Prompt management and View information about a version of your prompt in the Amazon Bedrock User Guide.

See also: AWS API Documentation

Request Syntax

client.get_prompt(
    promptIdentifier='string',
    promptVersion='string'
)
type promptIdentifier:

string

param promptIdentifier:

[REQUIRED]

The unique identifier of the prompt.

type promptVersion:

string

param promptVersion:

The version of the prompt about which you want to retrieve information. Omit this field to return information about the working draft of the prompt.

rtype:

dict

returns:

Response Syntax

{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'defaultVariant': 'string',
    'description': 'string',
    'id': 'string',
    'name': 'string',
    'updatedAt': datetime(2015, 1, 1),
    'variants': [
        {
            'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ],
    'version': 'string'
}

Response Structure

  • (dict) --

    • arn (string) --

      The Amazon Resource Name (ARN) of the prompt or the prompt version (if you specified a version in the request).

    • createdAt (datetime) --

      The time at which the prompt was created.

    • customerEncryptionKeyArn (string) --

      The Amazon Resource Name (ARN) of the KMS key that the prompt is encrypted with.

    • defaultVariant (string) --

      The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.

    • description (string) --

      The description of the prompt.

    • id (string) --

      The unique identifier of the prompt.

    • name (string) --

      The name of the prompt.

    • updatedAt (datetime) --

      The time at which the prompt was last updated.

    • variants (list) --

      A list of objects, each containing details about a variant of the prompt.

      • (dict) --

        Contains details about a variant of the prompt.

        • additionalModelRequestFields (document) --

          Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.

        • inferenceConfiguration (dict) --

          Contains inference configurations for the prompt variant.

          • text (dict) --

            Contains inference configurations for a text prompt.

            • maxTokens (integer) --

              The maximum number of tokens to return in the response.

            • stopSequences (list) --

              A list of strings that define sequences after which the model will stop generating.

              • (string) --

            • temperature (float) --

              Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

            • topP (float) --

              The percentage of most-likely candidates that the model considers for the next token.

        • metadata (list) --

          An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

          • (dict) --

            Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

            • key (string) --

              The key of a metadata tag for a prompt variant.

            • value (string) --

              The value of a metadata tag for a prompt variant.

        • modelId (string) --

          The unique identifier of the model or inference profile with which to run inference on the prompt.

        • name (string) --

          The name of the prompt variant.

        • templateConfiguration (dict) --

          Contains configurations for the prompt template.

          • text (dict) --

            Contains configurations for the text in a message for a prompt.

            • inputVariables (list) --

              An array of the variables in the prompt template.

              • (dict) --

                Contains information about a variable in the prompt.

                • name (string) --

                  The name of the variable.

            • text (string) --

              The message for the prompt.

        • templateType (string) --

          The type of prompt template to use.

    • version (string) --

      The version of the prompt.
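
A minimal sketch (the prompt identifier is a placeholder) of reading both the working draft and a numbered version, then inspecting each variant's model-specific overrides:

import boto3

client = boto3.client('bedrock-agent')

draft = client.get_prompt(promptIdentifier='PROMPT1234EX')                     # DRAFT version
v1 = client.get_prompt(promptIdentifier='PROMPT1234EX', promptVersion='1')     # version 1

for variant in draft['variants']:
    print(variant['name'], variant.get('additionalModelRequestFields'))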

UpdateFlow (updated) Link ¶
Changes (both)
{'definition': {'nodes': {'configuration': {'prompt': {'sourceConfiguration': {'inline': {'additionalModelRequestFields': {}}}}}}}}

Modifies a flow. Include both fields that you want to keep and fields that you want to change. For more information, see How it works and Create a flow in Amazon Bedrock in the Amazon Bedrock User Guide.
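
Because you must include both the fields you want to keep and the fields you want to change, a common pattern (sketched below with placeholder identifiers; not part of the API reference) is to read the current flow, edit its definition, and send the required fields back:

import boto3

client = boto3.client('bedrock-agent')

current = client.get_flow(flowIdentifier='FLOW1234EX')   # placeholder flow ID

client.update_flow(
    flowIdentifier='FLOW1234EX',
    name=current['name'],
    executionRoleArn=current['executionRoleArn'],
    definition=current['definition'],   # edit nodes/connections in this dict as needed first
)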

See also: AWS API Documentation

Request Syntax

client.update_flow(
    customerEncryptionKeyArn='string',
    definition={
        'connections': [
            {
                'configuration': {
                    'conditional': {
                        'condition': 'string'
                    },
                    'data': {
                        'sourceOutput': 'string',
                        'targetInput': 'string'
                    }
                },
                'name': 'string',
                'source': 'string',
                'target': 'string',
                'type': 'Data'|'Conditional'
            },
        ],
        'nodes': [
            {
                'configuration': {
                    'agent': {
                        'agentAliasArn': 'string'
                    },
                    'collector': {},
                    'condition': {
                        'conditions': [
                            {
                                'expression': 'string',
                                'name': 'string'
                            },
                        ]
                    },
                    'input': {},
                    'iterator': {},
                    'knowledgeBase': {
                        'knowledgeBaseId': 'string',
                        'modelId': 'string'
                    },
                    'lambdaFunction': {
                        'lambdaArn': 'string'
                    },
                    'lex': {
                        'botAliasArn': 'string',
                        'localeId': 'string'
                    },
                    'output': {},
                    'prompt': {
                        'sourceConfiguration': {
                            'inline': {
                                'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
                                'inferenceConfiguration': {
                                    'text': {
                                        'maxTokens': 123,
                                        'stopSequences': [
                                            'string',
                                        ],
                                        'temperature': ...,
                                        'topP': ...
                                    }
                                },
                                'modelId': 'string',
                                'templateConfiguration': {
                                    'text': {
                                        'inputVariables': [
                                            {
                                                'name': 'string'
                                            },
                                        ],
                                        'text': 'string'
                                    }
                                },
                                'templateType': 'TEXT'
                            },
                            'resource': {
                                'promptArn': 'string'
                            }
                        }
                    },
                    'retrieval': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    },
                    'storage': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    }
                },
                'inputs': [
                    {
                        'expression': 'string',
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'name': 'string',
                'outputs': [
                    {
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'
            },
        ]
    },
    description='string',
    executionRoleArn='string',
    flowIdentifier='string',
    name='string'
)
type customerEncryptionKeyArn:

string

param customerEncryptionKeyArn:

The Amazon Resource Name (ARN) of the KMS key to encrypt the flow.

type definition:

dict

param definition:

A definition of the nodes and the connections between the nodes in the flow.

  • connections (list) --

    An array of connection definitions in the flow.

    • (dict) --

      Contains information about a connection between two nodes in the flow.

      • configuration (dict) --

        The configuration of the connection.

        • conditional (dict) --

          The configuration of a connection originating from a Condition node.

          • condition (string) -- [REQUIRED]

            The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.

        • data (dict) --

          The configuration of a connection originating from a node that isn't a Condition node.

          • sourceOutput (string) -- [REQUIRED]

            The name of the output in the source node that the connection begins from.

          • targetInput (string) -- [REQUIRED]

            The name of the input in the target node that the connection ends at.

      • name (string) -- [REQUIRED]

        A name for the connection that you can reference.

      • source (string) -- [REQUIRED]

        The node that the connection starts at.

      • target (string) -- [REQUIRED]

        The node that the connection ends at.

      • type (string) -- [REQUIRED]

        Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).

  • nodes (list) --

    An array of node definitions in the flow.

    • (dict) --

      Contains configurations about a node in the flow.

      • configuration (dict) --

        Contains configurations for the node.

        • agent (dict) --

          Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.

          • agentAliasArn (string) -- [REQUIRED]

            The Amazon Resource Name (ARN) of the alias of the agent to invoke.

        • collector (dict) --

          Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.

        • condition (dict) --

          Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.

          • conditions (list) -- [REQUIRED]

            An array of conditions. Each member contains the name of a condition and an expression that defines the condition.

            • (dict) --

              Defines a condition in the condition node.

              • expression (string) --

                Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.

              • name (string) -- [REQUIRED]

                A name for the condition that you can reference.

        • input (dict) --

          Contains configurations for an input flow node in your flow. This is the first node in the flow. You can't specify inputs for this node.

        • iterator (dict) --

          Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.

          The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.

        • knowledgeBase (dict) --

          Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.

          • knowledgeBaseId (string) -- [REQUIRED]

            The unique identifier of the knowledge base to query.

          • modelId (string) --

            The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.

        • lambdaFunction (dict) --

          Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.

          • lambdaArn (string) -- [REQUIRED]

            The Amazon Resource Name (ARN) of the Lambda function to invoke.

        • lex (dict) --

          Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.

          • botAliasArn (string) -- [REQUIRED]

            The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.

          • localeId (string) -- [REQUIRED]

            The locale of the Amazon Lex bot to invoke (for example, en_US).

        • output (dict) --

          Contains configurations for an output flow node in your flow. This is the last node in the flow. You can't specify outputs for this node.

        • prompt (dict) --

          Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.

          • sourceConfiguration (dict) -- [REQUIRED]

            Specifies whether the prompt is from Prompt management or defined inline.

            • inline (dict) --

              Contains configurations for a prompt that is defined inline.

              • additionalModelRequestFields (document) --

                Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models. A minimal sketch showing where this field fits in a flow definition follows the parameter descriptions below.

              • inferenceConfiguration (dict) --

                Contains inference configurations for the prompt.

                • text (dict) --

                  Contains inference configurations for a text prompt.

                  • maxTokens (integer) --

                    The maximum number of tokens to return in the response.

                  • stopSequences (list) --

                    A list of strings that define sequences after which the model will stop generating.

                    • (string) --

                  • temperature (float) --

                    Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

                  • topP (float) --

                    The percentage of most-likely candidates that the model considers for the next token.

              • modelId (string) -- [REQUIRED]

                The unique identifier of the model or inference profile to run inference with.

              • templateConfiguration (dict) -- [REQUIRED]

                Contains a prompt and variables in the prompt that can be replaced with values at runtime.

                • text (dict) --

                  Contains configurations for the text in a message for a prompt.

                  • inputVariables (list) --

                    An array of the variables in the prompt template.

                    • (dict) --

                      Contains information about a variable in the prompt.

                      • name (string) --

                        The name of the variable.

                  • text (string) -- [REQUIRED]

                    The message for the prompt.

              • templateType (string) -- [REQUIRED]

                The type of prompt template.

            • resource (dict) --

              Contains configurations for a prompt from Prompt management.

              • promptArn (string) -- [REQUIRED]

                The Amazon Resource Name (ARN) of the prompt from Prompt management.

        • retrieval (dict) --

          Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.

          • serviceConfiguration (dict) -- [REQUIRED]

            Contains configurations for the service to use for retrieving data to return as the output from the node.

            • s3 (dict) --

              Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.

              • bucketName (string) -- [REQUIRED]

                The name of the Amazon S3 bucket from which to retrieve data.

        • storage (dict) --

          Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.

          • serviceConfiguration (dict) -- [REQUIRED]

            Contains configurations for the service to use for storing the input into the node.

            • s3 (dict) --

              Contains configurations for the Amazon S3 location in which to store the input into the node.

              • bucketName (string) -- [REQUIRED]

                The name of the Amazon S3 bucket in which to store the input into the node.

      • inputs (list) --

        An array of objects, each of which contains information about an input into the node.

        • (dict) --

          Contains configurations for an input to a node.

          • expression (string) -- [REQUIRED]

            An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.

          • name (string) -- [REQUIRED]

            A name for the input that you can reference.

          • type (string) -- [REQUIRED]

            The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.

      • name (string) -- [REQUIRED]

        A name for the node.

      • outputs (list) --

        A list of objects, each of which contains information about an output from the node.

        • (dict) --

          Contains configurations for an output from a node.

          • name (string) -- [REQUIRED]

            A name for the output that you can reference.

          • type (string) -- [REQUIRED]

            The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.

      • type (string) -- [REQUIRED]

        The type of node. This value must match the name of the key that you provide in the configuration field (FlowNodeConfiguration).

type description:

string

param description:

A description for the flow.

type executionRoleArn:

string

param executionRoleArn:

[REQUIRED]

The Amazon Resource Name (ARN) of the service role with permissions to create and manage a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.

type flowIdentifier:

string

param flowIdentifier:

[REQUIRED]

The unique identifier of the flow.

type name:

string

param name:

[REQUIRED]

A name for the flow.
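
To make the definition shape concrete, the following is a minimal, hedged sketch of this flow update request (update_flow) that wires an input node to an inline prompt node and then to an output node, including the new additionalModelRequestFields pass-through. The flow identifier, role ARN, and model ID are placeholders, and the node input/output names, the $.data expressions, and the top_k field are assumptions based on common flow conventions rather than values taken from this reference.

import boto3

# Sketch only: identifiers, ARNs, the model ID, and top_k are placeholders/assumptions.
client = boto3.client("bedrock-agent")

response = client.update_flow(
    flowIdentifier="FLOW1234567890",
    name="example-flow",
    executionRoleArn="arn:aws:iam::111122223333:role/ExampleFlowRole",
    definition={
        "nodes": [
            {
                "name": "FlowInput",
                "type": "Input",
                "configuration": {"input": {}},
                "outputs": [{"name": "document", "type": "String"}],
            },
            {
                "name": "Summarize",
                "type": "Prompt",
                "configuration": {
                    "prompt": {
                        "sourceConfiguration": {
                            "inline": {
                                "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
                                "templateType": "TEXT",
                                "templateConfiguration": {
                                    "text": {
                                        "text": "Summarize the following: {{document}}",
                                        "inputVariables": [{"name": "document"}],
                                    }
                                },
                                "inferenceConfiguration": {
                                    "text": {"maxTokens": 256, "temperature": 0.2}
                                },
                                # New field: model-specific parameters passed through as-is.
                                "additionalModelRequestFields": {"top_k": 250},
                            }
                        }
                    }
                },
                "inputs": [{"name": "document", "type": "String", "expression": "$.data"}],
                "outputs": [{"name": "modelCompletion", "type": "String"}],
            },
            {
                "name": "FlowOutput",
                "type": "Output",
                "configuration": {"output": {}},
                "inputs": [{"name": "document", "type": "String", "expression": "$.data"}],
            },
        ],
        "connections": [
            {
                "name": "InputToPrompt",
                "type": "Data",
                "source": "FlowInput",
                "target": "Summarize",
                "configuration": {"data": {"sourceOutput": "document", "targetInput": "document"}},
            },
            {
                "name": "PromptToOutput",
                "type": "Data",
                "source": "Summarize",
                "target": "FlowOutput",
                "configuration": {"data": {"sourceOutput": "modelCompletion", "targetInput": "document"}},
            },
        ],
    },
)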

rtype:

dict

returns:

Response Syntax

{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'definition': {
        'connections': [
            {
                'configuration': {
                    'conditional': {
                        'condition': 'string'
                    },
                    'data': {
                        'sourceOutput': 'string',
                        'targetInput': 'string'
                    }
                },
                'name': 'string',
                'source': 'string',
                'target': 'string',
                'type': 'Data'|'Conditional'
            },
        ],
        'nodes': [
            {
                'configuration': {
                    'agent': {
                        'agentAliasArn': 'string'
                    },
                    'collector': {},
                    'condition': {
                        'conditions': [
                            {
                                'expression': 'string',
                                'name': 'string'
                            },
                        ]
                    },
                    'input': {},
                    'iterator': {},
                    'knowledgeBase': {
                        'knowledgeBaseId': 'string',
                        'modelId': 'string'
                    },
                    'lambdaFunction': {
                        'lambdaArn': 'string'
                    },
                    'lex': {
                        'botAliasArn': 'string',
                        'localeId': 'string'
                    },
                    'output': {},
                    'prompt': {
                        'sourceConfiguration': {
                            'inline': {
                                'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
                                'inferenceConfiguration': {
                                    'text': {
                                        'maxTokens': 123,
                                        'stopSequences': [
                                            'string',
                                        ],
                                        'temperature': ...,
                                        'topP': ...
                                    }
                                },
                                'modelId': 'string',
                                'templateConfiguration': {
                                    'text': {
                                        'inputVariables': [
                                            {
                                                'name': 'string'
                                            },
                                        ],
                                        'text': 'string'
                                    }
                                },
                                'templateType': 'TEXT'
                            },
                            'resource': {
                                'promptArn': 'string'
                            }
                        }
                    },
                    'retrieval': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    },
                    'storage': {
                        'serviceConfiguration': {
                            's3': {
                                'bucketName': 'string'
                            }
                        }
                    }
                },
                'inputs': [
                    {
                        'expression': 'string',
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'name': 'string',
                'outputs': [
                    {
                        'name': 'string',
                        'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
                    },
                ],
                'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'
            },
        ]
    },
    'description': 'string',
    'executionRoleArn': 'string',
    'id': 'string',
    'name': 'string',
    'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared',
    'updatedAt': datetime(2015, 1, 1),
    'version': 'string'
}

Response Structure

  • (dict) --

    • arn (string) --

      The Amazon Resource Name (ARN) of the flow.

    • createdAt (datetime) --

      The time at which the flow was created.

    • customerEncryptionKeyArn (string) --

      The Amazon Resource Name (ARN) of the KMS key that the flow was encrypted with.

    • definition (dict) --

      A definition of the nodes and the connections between nodes in the flow.

      • connections (list) --

        An array of connection definitions in the flow.

        • (dict) --

          Contains information about a connection between two nodes in the flow.

          • configuration (dict) --

            The configuration of the connection.

            • conditional (dict) --

              The configuration of a connection originating from a Condition node.

              • condition (string) --

                The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.

            • data (dict) --

              The configuration of a connection originating from a node that isn't a Condition node.

              • sourceOutput (string) --

                The name of the output in the source node that the connection begins from.

              • targetInput (string) --

                The name of the input in the target node that the connection ends at.

          • name (string) --

            A name for the connection that you can reference.

          • source (string) --

            The node that the connection starts at.

          • target (string) --

            The node that the connection ends at.

          • type (string) --

            Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).

      • nodes (list) --

        An array of node definitions in the flow.

        • (dict) --

          Contains configurations about a node in the flow.

          • configuration (dict) --

            Contains configurations for the node.

            • agent (dict) --

              Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.

              • agentAliasArn (string) --

                The Amazon Resource Name (ARN) of the alias of the agent to invoke.

            • collector (dict) --

              Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.

            • condition (dict) --

              Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.

              • conditions (list) --

                An array of conditions. Each member contains the name of a condition and an expression that defines the condition.

                • (dict) --

                  Defines a condition in the condition node.

                  • expression (string) --

                    Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.

                  • name (string) --

                    A name for the condition that you can reference.

            • input (dict) --

              Contains configurations for an input flow node in your flow. This is the first node in the flow. You can't specify inputs for this node.

            • iterator (dict) --

              Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.

              The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.

            • knowledgeBase (dict) --

              Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.

              • knowledgeBaseId (string) --

                The unique identifier of the knowledge base to query.

              • modelId (string) --

                The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.

            • lambdaFunction (dict) --

              Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.

              • lambdaArn (string) --

                The Amazon Resource Name (ARN) of the Lambda function to invoke.

            • lex (dict) --

              Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.

              • botAliasArn (string) --

                The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.

              • localeId (string) --

                The locale of the Amazon Lex bot to invoke (for example, en_US).

            • output (dict) --

              Contains configurations for an output flow node in your flow. This is the last node in the flow. You can't specify outputs for this node.

            • prompt (dict) --

              Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.

              • sourceConfiguration (dict) --

                Specifies whether the prompt is from Prompt management or defined inline.

                • inline (dict) --

                  Contains configurations for a prompt that is defined inline.

                  • additionalModelRequestFields (document) --

                    Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.

                  • inferenceConfiguration (dict) --

                    Contains inference configurations for the prompt.

                    • text (dict) --

                      Contains inference configurations for a text prompt.

                      • maxTokens (integer) --

                        The maximum number of tokens to return in the response.

                      • stopSequences (list) --

                        A list of strings that define sequences after which the model will stop generating.

                        • (string) --

                      • temperature (float) --

                        Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

                      • topP (float) --

                        The percentage of most-likely candidates that the model considers for the next token.

                  • modelId (string) --

                    The unique identifier of the model or inference profile to run inference with.

                  • templateConfiguration (dict) --

                    Contains a prompt and variables in the prompt that can be replaced with values at runtime.

                    • text (dict) --

                      Contains configurations for the text in a message for a prompt.

                      • inputVariables (list) --

                        An array of the variables in the prompt template.

                        • (dict) --

                          Contains information about a variable in the prompt.

                          • name (string) --

                            The name of the variable.

                      • text (string) --

                        The message for the prompt.

                  • templateType (string) --

                    The type of prompt template.

                • resource (dict) --

                  Contains configurations for a prompt from Prompt management.

                  • promptArn (string) --

                    The Amazon Resource Name (ARN) of the prompt from Prompt management.

            • retrieval (dict) --

              Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.

              • serviceConfiguration (dict) --

                Contains configurations for the service to use for retrieving data to return as the output from the node.

                • s3 (dict) --

                  Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.

                  • bucketName (string) --

                    The name of the Amazon S3 bucket from which to retrieve data.

            • storage (dict) --

              Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.

              • serviceConfiguration (dict) --

                Contains configurations for the service to use for storing the input into the node.

                • s3 (dict) --

                  Contains configurations for the Amazon S3 location in which to store the input into the node.

                  • bucketName (string) --

                    The name of the Amazon S3 bucket in which to store the input into the node.

          • inputs (list) --

            An array of objects, each of which contains information about an input into the node.

            • (dict) --

              Contains configurations for an input to a node.

              • expression (string) --

                An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.

              • name (string) --

                A name for the input that you can reference.

              • type (string) --

                The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.

          • name (string) --

            A name for the node.

          • outputs (list) --

            A list of objects, each of which contains information about an output from the node.

            • (dict) --

              Contains configurations for an output from a node.

              • name (string) --

                A name for the output that you can reference.

              • type (string) --

                The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.

          • type (string) --

            The type of node. This value must match the name of the key that you provide in the configuration field (FlowNodeConfiguration).

    • description (string) --

      The description of the flow.

    • executionRoleArn (string) --

      The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.

    • id (string) --

      The unique identifier of the flow.

    • name (string) --

      The name of the flow.

    • status (string) --

      The status of the flow. When you submit this request, the status will be NotPrepared. If updating fails, the status becomes Failed.

    • updatedAt (datetime) --

      The time at which the flow was last updated.

    • version (string) --

      The version of the flow. When you update a flow, the version updated is the DRAFT version.
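
Because the returned status starts at NotPrepared, the updated DRAFT still has to be prepared before it can be tested. As a hedged follow-up sketch (GetFlow and PrepareFlow are separate operations that this entry does not document, and the flow identifier is the same placeholder used above), a status check and preparation step might look like this:

import boto3

client = boto3.client("bedrock-agent")
flow_id = "FLOW1234567890"  # placeholder; reuse the identifier from the update call

# A freshly updated flow reports NotPrepared until it is prepared.
flow = client.get_flow(flowIdentifier=flow_id)
if flow["status"] == "NotPrepared":
    # Package the DRAFT version so the changes can be tested.
    client.prepare_flow(flowIdentifier=flow_id)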

UpdatePrompt (updated) Link ¶
Changes (both)
{'variants': {'additionalModelRequestFields': {}}}

Modifies a prompt in your prompt library. Include both fields that you want to keep and fields that you want to replace. For more information, see Prompt management in Amazon Bedrock and Edit prompts in your prompt library in the Amazon Bedrock User Guide.

See also: AWS API Documentation

Request Syntax

client.update_prompt(
    customerEncryptionKeyArn='string',
    defaultVariant='string',
    description='string',
    name='string',
    promptIdentifier='string',
    variants=[
        {
            'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ]
)
type customerEncryptionKeyArn:

string

param customerEncryptionKeyArn:

The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.

type defaultVariant:

string

param defaultVariant:

The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.

type description:

string

param description:

A description for the prompt.

type name:

string

param name:

[REQUIRED]

A name for the prompt.

type promptIdentifier:

string

param promptIdentifier:

[REQUIRED]

The unique identifier of the prompt.

type variants:

list

param variants:

A list of objects, each containing details about a variant of the prompt.

  • (dict) --

    Contains details about a variant of the prompt.

    • additionalModelRequestFields (document) --

      Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models. A minimal sketch showing how to set this field on a variant follows the parameter descriptions below.

    • inferenceConfiguration (dict) --

      Contains inference configurations for the prompt variant.

      • text (dict) --

        Contains inference configurations for a text prompt.

        • maxTokens (integer) --

          The maximum number of tokens to return in the response.

        • stopSequences (list) --

          A list of strings that define sequences after which the model will stop generating.

          • (string) --

        • temperature (float) --

          Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

        • topP (float) --

          The percentage of most-likely candidates that the model considers for the next token.

    • metadata (list) --

      An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

      • (dict) --

        Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

        • key (string) -- [REQUIRED]

          The key of a metadata tag for a prompt variant.

        • value (string) -- [REQUIRED]

          The value of a metadata tag for a prompt variant.

    • modelId (string) --

      The unique identifier of the model or inference profile with which to run inference on the prompt.

    • name (string) -- [REQUIRED]

      The name of the prompt variant.

    • templateConfiguration (dict) -- [REQUIRED]

      Contains configurations for the prompt template.

      • text (dict) --

        Contains configurations for the text in a message for a prompt.

        • inputVariables (list) --

          An array of the variables in the prompt template.

          • (dict) --

            Contains information about a variable in the prompt.

            • name (string) --

              The name of the variable.

        • text (string) -- [REQUIRED]

          The message for the prompt.

    • templateType (string) -- [REQUIRED]

      The type of prompt template to use.
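
To show the new variant-level field in context, here is a minimal, hedged sketch of an UpdatePrompt call that replaces a single TEXT variant and uses additionalModelRequestFields to pass a model-specific parameter. The prompt identifier, model ID, and the top_k field are placeholders/assumptions; note that this operation replaces the stored prompt fields, so include every field you want to keep.

import boto3

# Sketch only: the identifier, model ID, and top_k are placeholders/assumptions.
client = boto3.client("bedrock-agent")

response = client.update_prompt(
    promptIdentifier="PROMPT1234567890",
    name="summarizer",
    defaultVariant="concise",
    variants=[
        {
            "name": "concise",
            "templateType": "TEXT",
            "templateConfiguration": {
                "text": {
                    "text": "Summarize the following text: {{document}}",
                    "inputVariables": [{"name": "document"}],
                }
            },
            "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
            "inferenceConfiguration": {"text": {"maxTokens": 256, "temperature": 0.2}},
            # New field: model-specific parameters not covered by inferenceConfiguration.
            "additionalModelRequestFields": {"top_k": 250},
        },
    ],
)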

rtype:

dict

returns:

Response Syntax

{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'defaultVariant': 'string',
    'description': 'string',
    'id': 'string',
    'name': 'string',
    'updatedAt': datetime(2015, 1, 1),
    'variants': [
        {
            'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ],
    'version': 'string'
}

Response Structure

  • (dict) --

    • arn (string) --

      The Amazon Resource Name (ARN) of the prompt.

    • createdAt (datetime) --

      The time at which the prompt was created.

    • customerEncryptionKeyArn (string) --

      The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.

    • defaultVariant (string) --

      The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.

    • description (string) --

      The description of the prompt.

    • id (string) --

      The unique identifier of the prompt.

    • name (string) --

      The name of the prompt.

    • updatedAt (datetime) --

      The time at which the prompt was last updated.

    • variants (list) --

      A list of objects, each containing details about a variant of the prompt.

      • (dict) --

        Contains details about a variant of the prompt.

        • additionalModelRequestFields (document) --

          Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.

        • inferenceConfiguration (dict) --

          Contains inference configurations for the prompt variant.

          • text (dict) --

            Contains inference configurations for a text prompt.

            • maxTokens (integer) --

              The maximum number of tokens to return in the response.

            • stopSequences (list) --

              A list of strings that define sequences after which the model will stop generating.

              • (string) --

            • temperature (float) --

              Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

            • topP (float) --

              The percentage of most-likely candidates that the model considers for the next token.

        • metadata (list) --

          An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

          • (dict) --

            Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.

            • key (string) --

              The key of a metadata tag for a prompt variant.

            • value (string) --

              The value of a metadata tag for a prompt variant.

        • modelId (string) --

          The unique identifier of the model or inference profile with which to run inference on the prompt.

        • name (string) --

          The name of the prompt variant.

        • templateConfiguration (dict) --

          Contains configurations for the prompt template.

          • text (dict) --

            Contains configurations for the text in a message for a prompt.

            • inputVariables (list) --

              An array of the variables in the prompt template.

              • (dict) --

                Contains information about a variable in the prompt.

                • name (string) --

                  The name of the variable.

            • text (string) --

              The message for the prompt.

        • templateType (string) --

          The type of prompt template to use.

    • version (string) --

      The version of the prompt. When you update a prompt, the version updated is the DRAFT version.
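
Since updates always land on the DRAFT version, a quick way to confirm what was stored, including any additionalModelRequestFields on each variant, is to re-read the prompt. This is a hedged sketch; GetPrompt is a separate operation not documented in this entry, and the identifier is a placeholder.

import boto3

client = boto3.client("bedrock-agent")

# Re-read the DRAFT prompt to inspect the stored variants (placeholder identifier).
prompt = client.get_prompt(promptIdentifier="PROMPT1234567890")
for variant in prompt.get("variants", []):
    print(variant["name"],
          variant.get("modelId"),
          variant.get("additionalModelRequestFields"))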