2024/11/07 - Agents for Amazon Bedrock - 1 new, 9 updated API methods
Changes: Added prompt support for chat template configuration and the agent generative AI resource. Added support for configuring an optional guardrail in Prompt and Knowledge Base nodes in Prompt Flows. Added an API to validate flow definitions.
Validates the definition of a flow.
See also: AWS API Documentation
Request Syntax
client.validate_flow_definition(
    definition={
        'connections': [
            {
                'configuration': {
                    'conditional': {'condition': 'string'},
                    'data': {'sourceOutput': 'string', 'targetInput': 'string'}
                },
                'name': 'string',
                'source': 'string',
                'target': 'string',
                'type': 'Data'|'Conditional'
            },
        ],
        'nodes': [
            {
                'configuration': {
                    'agent': {'agentAliasArn': 'string'},
                    'collector': {},
                    'condition': {
                        'conditions': [
                            {'expression': 'string', 'name': 'string'},
                        ]
                    },
                    'input': {},
                    'iterator': {},
                    'knowledgeBase': {
                        'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'},
                        'knowledgeBaseId': 'string',
                        'modelId': 'string'
                    },
                    'lambdaFunction': {'lambdaArn': 'string'},
                    'lex': {'botAliasArn': 'string', 'localeId': 'string'},
                    'output': {},
                    'prompt': {
                        'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'},
                        'sourceConfiguration': {
                            'inline': {
                                'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
                                'inferenceConfiguration': {
                                    'text': {
                                        'maxTokens': 123,
                                        'stopSequences': ['string',],
                                        'temperature': ...,
                                        'topP': ...
                                    }
                                },
                                'modelId': 'string',
                                'templateConfiguration': {
                                    'chat': {
                                        'inputVariables': [{'name': 'string'},],
                                        'messages': [
                                            {
                                                'content': [{'text': 'string'},],
                                                'role': 'user'|'assistant'
                                            },
                                        ],
                                        'system': [{'text': 'string'},],
                                        'toolConfiguration': {
                                            'toolChoice': {
                                                'any': {},
                                                'auto': {},
                                                'tool': {'name': 'string'}
                                            },
                                            'tools': [
                                                {
                                                    'toolSpec': {
                                                        'description': 'string',
                                                        'inputSchema': {'json': {...}|[...]|123|123.4|'string'|True|None},
                                                        'name': 'string'
                                                    }
                                                },
                                            ]
                                        }
                                    },
                                    'text': {
                                        'inputVariables': [{'name': 'string'},],
                                        'text': 'string'
                                    }
                                },
                                'templateType': 'TEXT'|'CHAT'
                            },
                            'resource': {'promptArn': 'string'}
                        }
                    },
                    'retrieval': {'serviceConfiguration': {'s3': {'bucketName': 'string'}}},
                    'storage': {'serviceConfiguration': {'s3': {'bucketName': 'string'}}}
                },
                'inputs': [
                    {'expression': 'string', 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array'},
                ],
                'name': 'string',
                'outputs': [
                    {'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array'},
                ],
                'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'
            },
        ]
    }
)
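As a concrete illustration of the syntax above, the following sketch assembles a minimal Input -> Prompt -> Output flow definition. The node names, expressions, and model ID are illustrative placeholders, not values taken from this documentation; the boto3 call itself is commented out because it requires AWS credentials.

```python
# Minimal flow definition: FlowInput -> Summarize (Prompt node) -> FlowOutput.
# All names and the model ID are assumptions for illustration only.
definition = {
    'nodes': [
        {'name': 'FlowInput', 'type': 'Input',
         'outputs': [{'name': 'document', 'type': 'String'}]},
        {'name': 'Summarize', 'type': 'Prompt',
         'configuration': {'prompt': {'sourceConfiguration': {'inline': {
             'modelId': 'anthropic.claude-3-haiku-20240307-v1:0',  # placeholder model ID
             'templateType': 'TEXT',
             'templateConfiguration': {'text': {
                 'text': 'Summarize the following text: {{document}}',
                 'inputVariables': [{'name': 'document'}],
             }},
         }}}},
         'inputs': [{'name': 'document', 'type': 'String', 'expression': '$.data'}],
         'outputs': [{'name': 'modelCompletion', 'type': 'String'}]},
        {'name': 'FlowOutput', 'type': 'Output',
         'inputs': [{'name': 'document', 'type': 'String', 'expression': '$.data'}]},
    ],
    'connections': [
        {'name': 'InToPrompt', 'type': 'Data',
         'source': 'FlowInput', 'target': 'Summarize',
         'configuration': {'data': {'sourceOutput': 'document',
                                    'targetInput': 'document'}}},
        {'name': 'PromptToOut', 'type': 'Data',
         'source': 'Summarize', 'target': 'FlowOutput',
         'configuration': {'data': {'sourceOutput': 'modelCompletion',
                                    'targetInput': 'document'}}},
    ],
}

# With credentials configured, the definition could then be validated:
# import boto3
# client = boto3.client('bedrock-agent')
# response = client.validate_flow_definition(definition=definition)
# for issue in response['validations']:
#     print(issue['severity'], issue['type'], issue['message'])

print(f"{len(definition['nodes'])} nodes, {len(definition['connections'])} connections")
# -> 3 nodes, 2 connections
```

Note that every connection names both endpoints twice: once at the node level (source/target) and once at the output/input level, which is what the validator checks for consistency.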
dict
[REQUIRED]
The definition of a flow to validate.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
configuration (dict) --
The configuration of the connection.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) -- [REQUIRED]
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) -- [REQUIRED]
The name of the output in the source node that the connection begins from.
targetInput (string) -- [REQUIRED]
The name of the input in the target node that the connection ends at.
name (string) -- [REQUIRED]
A name for the connection that you can reference.
source (string) -- [REQUIRED]
The node that the connection starts at.
target (string) -- [REQUIRED]
The node that the connection ends at.
type (string) -- [REQUIRED]
Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
configuration (dict) --
Contains configurations for the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
condition (dict) --
Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) -- [REQUIRED]
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
name (string) -- [REQUIRED]
A name for the condition that you can reference.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
knowledgeBaseId (string) -- [REQUIRED]
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Lambda function to invoke.
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) -- [REQUIRED]
The locale of the Amazon Lex bot to invoke (for example, en_US).
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
sourceConfiguration (dict) -- [REQUIRED]
Specifies whether the prompt is from Prompt management or defined inline.
inline (dict) --
Contains configurations for a prompt that is defined inline.
additionalModelRequestFields (document) --
Additional fields to be included in the model request for the Prompt node.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
modelId (string) -- [REQUIRED]
The unique identifier of the model or inference profile to run inference with.
templateConfiguration (dict) -- [REQUIRED]
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) -- [REQUIRED]
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) -- [REQUIRED]
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) -- [REQUIRED]
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) -- [REQUIRED]
The name of the tool.
tools (list) -- [REQUIRED]
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) -- [REQUIRED]
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
name (string) -- [REQUIRED]
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) -- [REQUIRED]
The message for the prompt.
templateType (string) -- [REQUIRED]
The type of prompt template.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the prompt from Prompt management.
retrieval (dict) --
Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket from which to retrieve data.
storage (dict) --
Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket in which to store the input into the node.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input to a node.
expression (string) -- [REQUIRED]
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
name (string) -- [REQUIRED]
A name for the input that you can reference.
type (string) -- [REQUIRED]
The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
name (string) -- [REQUIRED]
A name for the node.
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) -- [REQUIRED]
A name for the output that you can reference.
type (string) -- [REQUIRED]
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
type (string) -- [REQUIRED]
The type of node. This value must match the name of the key that you provide in the FlowNodeConfiguration field.
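Since this release adds the CHAT template type, the following sketch shows one way an inline chat prompt configuration for a Prompt node could look, following the templateConfiguration shape documented above. The model ID, message text, and variable name are illustrative assumptions, not values from this documentation.

```python
# Illustrative inline CHAT prompt configuration for a Prompt node.
# Model ID, texts, and the 'review' variable are placeholders.
chat_prompt_config = {
    'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',  # placeholder model ID
    'templateType': 'CHAT',
    'templateConfiguration': {
        'chat': {
            # System prompts set context and behavior for the model.
            'system': [{'text': 'You are a concise sentiment classifier.'}],
            # Messages alternate between 'user' and 'assistant' roles.
            'messages': [
                {'role': 'user',
                 'content': [{'text': 'Classify the sentiment of: {{review}}'}]},
            ],
            # Each {{variable}} in the message text is declared here.
            'inputVariables': [{'name': 'review'}],
        }
    },
}

print(chat_prompt_config['templateType'])
# -> CHAT
```

This dict would be placed at `nodes[i].configuration.prompt.sourceConfiguration.inline` in a flow definition.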
dict
Response Syntax
{
    'validations': [
        {
            'details': {
                'cyclicConnection': {'connection': 'string'},
                'duplicateConditionExpression': {'expression': 'string', 'node': 'string'},
                'duplicateConnections': {'source': 'string', 'target': 'string'},
                'incompatibleConnectionDataType': {'connection': 'string'},
                'malformedConditionExpression': {'cause': 'string', 'condition': 'string', 'node': 'string'},
                'malformedNodeInputExpression': {'cause': 'string', 'input': 'string', 'node': 'string'},
                'mismatchedNodeInputType': {'expectedType': 'String'|'Number'|'Boolean'|'Object'|'Array', 'input': 'string', 'node': 'string'},
                'mismatchedNodeOutputType': {'expectedType': 'String'|'Number'|'Boolean'|'Object'|'Array', 'node': 'string', 'output': 'string'},
                'missingConnectionConfiguration': {'connection': 'string'},
                'missingDefaultCondition': {'node': 'string'},
                'missingEndingNodes': {},
                'missingNodeConfiguration': {'node': 'string'},
                'missingNodeInput': {'input': 'string', 'node': 'string'},
                'missingNodeOutput': {'node': 'string', 'output': 'string'},
                'missingStartingNodes': {},
                'multipleNodeInputConnections': {'input': 'string', 'node': 'string'},
                'unfulfilledNodeInput': {'input': 'string', 'node': 'string'},
                'unknownConnectionCondition': {'connection': 'string'},
                'unknownConnectionSource': {'connection': 'string'},
                'unknownConnectionSourceOutput': {'connection': 'string'},
                'unknownConnectionTarget': {'connection': 'string'},
                'unknownConnectionTargetInput': {'connection': 'string'},
                'unreachableNode': {'node': 'string'},
                'unsatisfiedConnectionConditions': {'connection': 'string'},
                'unspecified': {}
            },
            'message': 'string',
            'severity': 'Warning'|'Error',
            'type': 'CyclicConnection'|'DuplicateConnections'|'DuplicateConditionExpression'|'UnreachableNode'|'UnknownConnectionSource'|'UnknownConnectionSourceOutput'|'UnknownConnectionTarget'|'UnknownConnectionTargetInput'|'UnknownConnectionCondition'|'MalformedConditionExpression'|'MalformedNodeInputExpression'|'MismatchedNodeInputType'|'MismatchedNodeOutputType'|'IncompatibleConnectionDataType'|'MissingConnectionConfiguration'|'MissingDefaultCondition'|'MissingEndingNodes'|'MissingNodeConfiguration'|'MissingNodeInput'|'MissingNodeOutput'|'MissingStartingNodes'|'MultipleNodeInputConnections'|'UnfulfilledNodeInput'|'UnsatisfiedConnectionConditions'|'Unspecified'
        },
    ]
}
Response Structure
(dict) --
validations (list) --
An array of objects, each of which describes an issue identified during validation of the flow.
(dict) --
Contains information about validation of the flow.
details (dict) --
Specific details about the validation issue encountered in the flow.
cyclicConnection (dict) --
Details about a cyclic connection in the flow.
connection (string) --
The name of the connection that causes the cycle in the flow.
duplicateConditionExpression (dict) --
Details about duplicate condition expressions in a node.
expression (string) --
The duplicated condition expression.
node (string) --
The name of the node containing the duplicate condition expressions.
duplicateConnections (dict) --
Details about duplicate connections between nodes.
source (string) --
The name of the source node where the duplicate connection starts.
target (string) --
The name of the target node where the duplicate connection ends.
incompatibleConnectionDataType (dict) --
Details about incompatible data types in a connection.
connection (string) --
The name of the connection with incompatible data types.
malformedConditionExpression (dict) --
Details about a malformed condition expression in a node.
cause (string) --
The error message describing why the condition expression is malformed.
condition (string) --
The name of the malformed condition.
node (string) --
The name of the node containing the malformed condition expression.
malformedNodeInputExpression (dict) --
Details about a malformed input expression in a node.
cause (string) --
The error message describing why the input expression is malformed.
input (string) --
The name of the input with the malformed expression.
node (string) --
The name of the node containing the malformed input expression.
mismatchedNodeInputType (dict) --
Details about mismatched input data types in a node.
expectedType (string) --
The expected data type for the node input.
input (string) --
The name of the input with the mismatched data type.
node (string) --
The name of the node containing the input with the mismatched data type.
mismatchedNodeOutputType (dict) --
Details about mismatched output data types in a node.
expectedType (string) --
The expected data type for the node output.
node (string) --
The name of the node containing the output with the mismatched data type.
output (string) --
The name of the output with the mismatched data type.
missingConnectionConfiguration (dict) --
Details about missing configuration for a connection.
connection (string) --
The name of the connection missing configuration.
missingDefaultCondition (dict) --
Details about a missing default condition in a conditional node.
node (string) --
The name of the node missing the default condition.
missingEndingNodes (dict) --
Details about missing ending nodes in the flow.
missingNodeConfiguration (dict) --
Details about missing configuration for a node.
node (string) --
The name of the node missing configuration.
missingNodeInput (dict) --
Details about a missing required input in a node.
input (string) --
The name of the missing input.
node (string) --
The name of the node missing the required input.
missingNodeOutput (dict) --
Details about a missing required output in a node.
node (string) --
The name of the node missing the required output.
output (string) --
The name of the missing output.
missingStartingNodes (dict) --
Details about missing starting nodes in the flow.
multipleNodeInputConnections (dict) --
Details about multiple connections to a single node input.
input (string) --
The name of the input with multiple connections to it.
node (string) --
The name of the node containing the input with multiple connections.
unfulfilledNodeInput (dict) --
Details about an unfulfilled node input with no valid connections.
input (string) --
The name of the unfulfilled input. An input is unfulfilled if there are no data connections to it.
node (string) --
The name of the node containing the unfulfilled input.
unknownConnectionCondition (dict) --
Details about an unknown condition for a connection.
connection (string) --
The name of the connection with the unknown condition.
unknownConnectionSource (dict) --
Details about an unknown source node for a connection.
connection (string) --
The name of the connection with the unknown source.
unknownConnectionSourceOutput (dict) --
Details about an unknown source output for a connection.
connection (string) --
The name of the connection with the unknown source output.
unknownConnectionTarget (dict) --
Details about an unknown target node for a connection.
connection (string) --
The name of the connection with the unknown target.
unknownConnectionTargetInput (dict) --
Details about an unknown target input for a connection.
connection (string) --
The name of the connection with the unknown target input.
unreachableNode (dict) --
Details about an unreachable node in the flow.
node (string) --
The name of the unreachable node.
unsatisfiedConnectionConditions (dict) --
Details about unsatisfied conditions for a connection.
connection (string) --
The name of the connection with unsatisfied conditions.
unspecified (dict) --
Details about an unspecified validation.
message (string) --
A message describing the validation error.
severity (string) --
The severity of the issue described in the message.
type (string) --
The type of validation issue encountered in the flow.
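A common way to consume this response shape is to separate blocking errors from warnings before deciding whether to proceed. The sketch below works over a hand-written sample response (not real API output); the node names are invented for illustration.

```python
# Post-processing sketch for a validate_flow_definition response.
# sample_response is a hand-written example, not real API output.
sample_response = {
    'validations': [
        {'type': 'UnreachableNode', 'severity': 'Warning',
         'message': 'Node cannot be reached from any starting node.',
         'details': {'unreachableNode': {'node': 'Orphan'}}},
        {'type': 'MissingNodeConfiguration', 'severity': 'Error',
         'message': 'Node has no configuration.',
         'details': {'missingNodeConfiguration': {'node': 'Summarize'}}},
    ]
}

errors = [v for v in sample_response['validations'] if v['severity'] == 'Error']
warnings = [v for v in sample_response['validations'] if v['severity'] == 'Warning']

for issue in errors + warnings:
    # Each 'details' dict carries exactly one key, the lowerCamelCase
    # counterpart of the 'type' value, holding the structured fields.
    (detail_key, detail), = issue['details'].items()
    print(f"{issue['severity']}: {issue['type']} {detail}")

print(f"{len(errors)} error(s), {len(warnings)} warning(s)")
# -> 1 error(s), 1 warning(s)
```

Treating only `severity == 'Error'` entries as blocking, and logging warnings, mirrors how the console surfaces these two severities.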
{'definition': {'nodes': {'configuration': {'knowledgeBase': {'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'}}, 'prompt': {'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'}, 'sourceConfiguration': {'inline': {'templateConfiguration': {'chat': {'inputVariables': [{'name': 'string'}], 'messages': [{'content': [{'text': 'string'}], 'role': 'user | assistant'}], 'system': [{'text': 'string'}], 'toolConfiguration': {'toolChoice': {'any': {}, 'auto': {}, 'tool': {'name': 'string'}}, 'tools': [{'toolSpec': {'description': 'string', 'inputSchema': {'json': {}}, 'name': 'string'}}]}}}, 'templateType': {'CHAT'}}}}}}}}
Creates a prompt flow that you can use to send an input through various steps to yield an output. Configure nodes, each of which corresponds to a step of the flow, and create connections between the nodes to create paths to different outputs. For more information, see How it works and Create a flow in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.create_flow(
    clientToken='string',
    customerEncryptionKeyArn='string',
    definition={
        'connections': [
            {
                'configuration': {
                    'conditional': {'condition': 'string'},
                    'data': {'sourceOutput': 'string', 'targetInput': 'string'}
                },
                'name': 'string',
                'source': 'string',
                'target': 'string',
                'type': 'Data'|'Conditional'
            },
        ],
        'nodes': [
            {
                'configuration': {
                    'agent': {'agentAliasArn': 'string'},
                    'collector': {},
                    'condition': {
                        'conditions': [
                            {'expression': 'string', 'name': 'string'},
                        ]
                    },
                    'input': {},
                    'iterator': {},
                    'knowledgeBase': {
                        'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'},
                        'knowledgeBaseId': 'string',
                        'modelId': 'string'
                    },
                    'lambdaFunction': {'lambdaArn': 'string'},
                    'lex': {'botAliasArn': 'string', 'localeId': 'string'},
                    'output': {},
                    'prompt': {
                        'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'},
                        'sourceConfiguration': {
                            'inline': {
                                'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
                                'inferenceConfiguration': {
                                    'text': {
                                        'maxTokens': 123,
                                        'stopSequences': ['string',],
                                        'temperature': ...,
                                        'topP': ...
                                    }
                                },
                                'modelId': 'string',
                                'templateConfiguration': {
                                    'chat': {
                                        'inputVariables': [{'name': 'string'},],
                                        'messages': [
                                            {
                                                'content': [{'text': 'string'},],
                                                'role': 'user'|'assistant'
                                            },
                                        ],
                                        'system': [{'text': 'string'},],
                                        'toolConfiguration': {
                                            'toolChoice': {
                                                'any': {},
                                                'auto': {},
                                                'tool': {'name': 'string'}
                                            },
                                            'tools': [
                                                {
                                                    'toolSpec': {
                                                        'description': 'string',
                                                        'inputSchema': {'json': {...}|[...]|123|123.4|'string'|True|None},
                                                        'name': 'string'
                                                    }
                                                },
                                            ]
                                        }
                                    },
                                    'text': {
                                        'inputVariables': [{'name': 'string'},],
                                        'text': 'string'
                                    }
                                },
                                'templateType': 'TEXT'|'CHAT'
                            },
                            'resource': {'promptArn': 'string'}
                        }
                    },
                    'retrieval': {'serviceConfiguration': {'s3': {'bucketName': 'string'}}},
                    'storage': {'serviceConfiguration': {'s3': {'bucketName': 'string'}}}
                },
                'inputs': [
                    {'expression': 'string', 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array'},
                ],
                'name': 'string',
                'outputs': [
                    {'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array'},
                ],
                'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'
            },
        ]
    },
    description='string',
    executionRoleArn='string',
    name='string',
    tags={
        'string': 'string'
    }
)
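The request syntax above can be sketched as a keyword-argument dict. Every value here is a placeholder (the role ARN, account ID, flow name, and tags are assumptions, not values from this documentation), and the definition is left empty for brevity; in practice you would pass a full node/connection definition like the one accepted by validate_flow_definition. The API call is commented out because it requires credentials and a real IAM role.

```python
# Hedged sketch of a create_flow request; all values are placeholders.
flow_kwargs = {
    'name': 'document-summarizer',  # assumed flow name
    'description': 'Summarizes an input document.',
    # The service role must have permissions to create and manage flows.
    'executionRoleArn': 'arn:aws:iam::111122223333:role/BedrockFlowRole',  # placeholder ARN
    # Use a complete definition here; an empty one is shown only for brevity.
    'definition': {'nodes': [], 'connections': []},
    'tags': {'project': 'demo'},
}

# import boto3
# client = boto3.client('bedrock-agent')
# flow = client.create_flow(**flow_kwargs)
# print(flow['id'], flow['status'])

print(sorted(flow_kwargs))
# -> ['definition', 'description', 'executionRoleArn', 'name', 'tags']
```

Passing clientToken is optional; boto3 autopopulates it, which is what makes retries of this request idempotent.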
string
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
string
The Amazon Resource Name (ARN) of the KMS key to encrypt the flow.
dict
A definition of the nodes and connections between nodes in the flow.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
configuration (dict) --
The configuration of the connection.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) -- [REQUIRED]
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) -- [REQUIRED]
The name of the output in the source node that the connection begins from.
targetInput (string) -- [REQUIRED]
The name of the input in the target node that the connection ends at.
name (string) -- [REQUIRED]
A name for the connection that you can reference.
source (string) -- [REQUIRED]
The node that the connection starts at.
target (string) -- [REQUIRED]
The node that the connection ends at.
type (string) -- [REQUIRED]
Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
configuration (dict) --
Contains configurations for the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
condition (dict) --
Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) -- [REQUIRED]
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
name (string) -- [REQUIRED]
A name for the condition that you can reference.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
knowledgeBaseId (string) -- [REQUIRED]
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Lambda function to invoke.
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) -- [REQUIRED]
The locale of the Amazon Lex bot to invoke (for example, en_US).
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
sourceConfiguration (dict) -- [REQUIRED]
Specifies whether the prompt is from Prompt management or defined inline.
inline (dict) --
Contains configurations for a prompt that is defined inline.
additionalModelRequestFields (document) --
Additional fields to be included in the model request for the Prompt node.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
modelId (string) -- [REQUIRED]
The unique identifier of the model or inference profile to run inference with.
templateConfiguration (dict) -- [REQUIRED]
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) -- [REQUIRED]
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) -- [REQUIRED]
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) -- [REQUIRED]
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) -- [REQUIRED]
The name of the tool.
tools (list) -- [REQUIRED]
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) -- [REQUIRED]
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
name (string) -- [REQUIRED]
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) -- [REQUIRED]
The message for the prompt.
templateType (string) -- [REQUIRED]
The type of prompt template.
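Combining the chat fields above, a CHAT template configuration for an inline prompt might be sketched as follows. The variable name, tool name, schema, and message text are all hypothetical placeholders, and the {{city}} variable syntax is an assumption based on Prompt management conventions:

```python
# Illustrative CHAT templateConfiguration for an inline prompt.
# All identifiers (variable name, tool name, schema) are hypothetical.
chat_template_configuration = {
    "chat": {
        "inputVariables": [{"name": "city"}],
        "system": [{"text": "You are a concise weather assistant."}],
        "messages": [
            {
                "role": "user",
                "content": [{"text": "What is the weather in {{city}}?"}],
            },
        ],
        "toolConfiguration": {
            # 'auto': the model decides whether to call a tool or answer in text.
            "toolChoice": {"auto": {}},
            "tools": [
                {
                    "toolSpec": {
                        "name": "get_weather",
                        "description": "Look up current weather for a city.",
                        "inputSchema": {
                            "json": {
                                "type": "object",
                                "properties": {"city": {"type": "string"}},
                                "required": ["city"],
                            }
                        },
                    }
                }
            ],
        },
    },
}
```

A dict like this would be supplied as templateConfiguration alongside 'templateType': 'CHAT'.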
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the prompt from Prompt management.
retrieval (dict) --
Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket from which to retrieve data.
storage (dict) --
Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket in which to store the input into the node.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input to a node.
expression (string) -- [REQUIRED]
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
name (string) -- [REQUIRED]
A name for the input that you can reference.
type (string) -- [REQUIRED]
The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
name (string) -- [REQUIRED]
A name for the node.
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) -- [REQUIRED]
A name for the output that you can reference.
type (string) -- [REQUIRED]
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
type (string) -- [REQUIRED]
The type of node. This value must match the name of the key in the configuration that you provide in the FlowNodeConfiguration field.
string
A description for the flow.
string
[REQUIRED]
The Amazon Resource Name (ARN) of the service role with permissions to create and manage a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
string
[REQUIRED]
A name for the flow.
dict
Any tags that you want to attach to the flow. For more information, see Tagging resources in Amazon Bedrock.
(string) --
(string) --
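The definition structure documented above can be sketched end to end. The following is a minimal, hypothetical Input → Prompt → Output flow: the node names, model ID, prompt text, and expressions are illustrative placeholders, not values from this document. A dict like this is what the definition field expects, for example in a validate_flow_definition request (the client call is wrapped in a function rather than executed here):

```python
# Hypothetical minimal flow definition: Input -> Prompt -> Output.
# Node names, the model ID, prompt text, and expressions are placeholders.
minimal_definition = {
    "nodes": [
        {
            # Input node: first node in the flow; no inputs may be specified.
            "name": "FlowInput",
            "type": "Input",
            "outputs": [{"name": "document", "type": "String"}],
        },
        {
            # Prompt node with an inline TEXT prompt.
            "name": "Summarize",
            "type": "Prompt",
            "configuration": {
                "prompt": {
                    "sourceConfiguration": {
                        "inline": {
                            "modelId": "example.model-id-v1:0",  # placeholder
                            "templateType": "TEXT",
                            "templateConfiguration": {
                                "text": {
                                    "text": "Summarize: {{document}}",
                                    "inputVariables": [{"name": "document"}],
                                }
                            },
                        }
                    }
                }
            },
            "inputs": [
                {"name": "document", "type": "String", "expression": "$.data"}
            ],
            "outputs": [{"name": "modelCompletion", "type": "String"}],
        },
        {
            # Output node: last node in the flow; no outputs may be specified.
            "name": "FlowOutput",
            "type": "Output",
            "inputs": [
                {"name": "document", "type": "String", "expression": "$.data"}
            ],
        },
    ],
    "connections": [
        {
            "name": "InputToPrompt",
            "type": "Data",
            "source": "FlowInput",
            "target": "Summarize",
            "configuration": {
                "data": {"sourceOutput": "document", "targetInput": "document"}
            },
        },
        {
            "name": "PromptToOutput",
            "type": "Data",
            "source": "Summarize",
            "target": "FlowOutput",
            "configuration": {
                "data": {
                    "sourceOutput": "modelCompletion",
                    "targetInput": "document",
                }
            },
        },
    ],
}


def validate(client):
    """Validate the definition above with an Agents for Amazon Bedrock
    client, e.g. client = boto3.client("bedrock-agent")."""
    return client.validate_flow_definition(definition=minimal_definition)
```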
dict
Response Syntax
{ 'arn': 'string', 'createdAt': datetime(2015, 1, 1), 'customerEncryptionKeyArn': 'string', 'definition': { 'connections': [ { 'configuration': { 'conditional': { 'condition': 'string' }, 'data': { 'sourceOutput': 'string', 'targetInput': 'string' } }, 'name': 'string', 'source': 'string', 'target': 'string', 'type': 'Data'|'Conditional' }, ], 'nodes': [ { 'configuration': { 'agent': { 'agentAliasArn': 'string' }, 'collector': {}, 'condition': { 'conditions': [ { 'expression': 'string', 'name': 'string' }, ] }, 'input': {}, 'iterator': {}, 'knowledgeBase': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'knowledgeBaseId': 'string', 'modelId': 'string' }, 'lambdaFunction': { 'lambdaArn': 'string' }, 'lex': { 'botAliasArn': 'string', 'localeId': 'string' }, 'output': {}, 'prompt': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'sourceConfiguration': { 'inline': { 'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None, 'inferenceConfiguration': { 'text': { 'maxTokens': 123, 'stopSequences': [ 'string', ], 'temperature': ..., 'topP': ... 
} }, 'modelId': 'string', 'templateConfiguration': { 'chat': { 'inputVariables': [ { 'name': 'string' }, ], 'messages': [ { 'content': [ { 'text': 'string' }, ], 'role': 'user'|'assistant' }, ], 'system': [ { 'text': 'string' }, ], 'toolConfiguration': { 'toolChoice': { 'any': {}, 'auto': {}, 'tool': { 'name': 'string' } }, 'tools': [ { 'toolSpec': { 'description': 'string', 'inputSchema': { 'json': {...}|[...]|123|123.4|'string'|True|None }, 'name': 'string' } }, ] } }, 'text': { 'inputVariables': [ { 'name': 'string' }, ], 'text': 'string' } }, 'templateType': 'TEXT'|'CHAT' }, 'resource': { 'promptArn': 'string' } } }, 'retrieval': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } }, 'storage': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } } }, 'inputs': [ { 'expression': 'string', 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'name': 'string', 'outputs': [ { 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector' }, ] }, 'description': 'string', 'executionRoleArn': 'string', 'id': 'string', 'name': 'string', 'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared', 'updatedAt': datetime(2015, 1, 1), 'version': 'string' }
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the flow.
createdAt (datetime) --
The time at which the flow was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that you encrypted the flow with.
definition (dict) --
A definition of the nodes and connections between nodes in the flow.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
configuration (dict) --
The configuration of the connection.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) --
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) --
The name of the output in the source node that the connection begins from.
targetInput (string) --
The name of the input in the target node that the connection ends at.
name (string) --
A name for the connection that you can reference.
source (string) --
The node that the connection starts at.
target (string) --
The node that the connection ends at.
type (string) --
Whether the source node that the connection begins from is a condition node ( Conditional) or not ( Data).
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
configuration (dict) --
Contains configurations for the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) --
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
condition (dict) --
Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) --
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
name (string) --
A name for the condition that you can reference.
input (dict) --
Contains configurations for an input flow node in your flow. This is the first node in the flow; inputs can't be specified for this node.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
knowledgeBaseId (string) --
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) --
The Amazon Resource Name (ARN) of the Lambda function to invoke.
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) --
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) --
The locale to invoke the Amazon Lex bot in (for example, en_US).
output (dict) --
Contains configurations for an output flow node in your flow. This is the last node in the flow; outputs can't be specified for this node.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
sourceConfiguration (dict) --
Specifies whether the prompt is from Prompt management or defined inline.
inline (dict) --
Contains configurations for a prompt that is defined inline.
additionalModelRequestFields (:ref:`document<document>`) --
Additional fields to be included in the model request for the Prompt node.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
modelId (string) --
The unique identifier of the model or inference profile to run inference with.
templateConfiguration (dict) --
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) --
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
name (string) --
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) --
The Amazon Resource Name (ARN) of the prompt from Prompt management.
retrieval (dict) --
Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) --
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) --
The name of the Amazon S3 bucket from which to retrieve data.
storage (dict) --
Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) --
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) --
The name of the Amazon S3 bucket in which to store the input into the node.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input to a node.
expression (string) --
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
name (string) --
A name for the input that you can reference.
type (string) --
The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
name (string) --
A name for the node.
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) --
A name for the output that you can reference.
type (string) --
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
type (string) --
The type of node. This value must match the name of the key in the configuration that you provide in the FlowNodeConfiguration field.
description (string) --
The description of the flow.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
id (string) --
The unique identifier of the flow.
name (string) --
The name of the flow.
status (string) --
The status of the flow. When you submit this request, the status will be NotPrepared. If creation fails, the status becomes Failed.
updatedAt (datetime) --
The time at which the flow was last updated.
version (string) --
The version of the flow. When you create a flow, the version created is the DRAFT version.
{'definition': {'nodes': {'configuration': {'knowledgeBase': {'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'}}, 'prompt': {'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'}, 'sourceConfiguration': {'inline': {'templateConfiguration': {'chat': {'inputVariables': [{'name': 'string'}], 'messages': [{'content': [{'text': 'string'}], 'role': 'user'|'assistant'}], 'system': [{'text': 'string'}], 'toolConfiguration': {'toolChoice': {'any': {}, 'auto': {}, 'tool': {'name': 'string'}}, 'tools': [{'toolSpec': {'description': 'string', 'inputSchema': {'json': {}}, 'name': 'string'}}]}}}, 'templateType': {'CHAT'}}}}}}}}
Creates a version of the flow that you can deploy. For more information, see Deploy a flow in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.create_flow_version( clientToken='string', description='string', flowIdentifier='string' )
string
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
string
A description of the version of the flow.
string
[REQUIRED]
The unique identifier of the flow that you want to create a version of.
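As a sketch, the call takes only these parameters. The flow identifier and description below are hypothetical placeholders, and the call is wrapped in a function rather than executed here; clientToken is omitted because boto3 autopopulates it:

```python
def create_version(client, flow_id="FLOW1234EXAMPLE", description="First deployable version"):
    """Create a deployable version of a flow using an Agents for Amazon
    Bedrock client, e.g. client = boto3.client("bedrock-agent").
    The flow identifier and description are illustrative placeholders."""
    response = client.create_flow_version(
        flowIdentifier=flow_id,
        description=description,
    )
    # The response echoes the flow definition plus metadata, including the
    # new version number and the flow's preparation status.
    return response["version"], response["status"]
```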
dict
Response Syntax
{ 'arn': 'string', 'createdAt': datetime(2015, 1, 1), 'customerEncryptionKeyArn': 'string', 'definition': { 'connections': [ { 'configuration': { 'conditional': { 'condition': 'string' }, 'data': { 'sourceOutput': 'string', 'targetInput': 'string' } }, 'name': 'string', 'source': 'string', 'target': 'string', 'type': 'Data'|'Conditional' }, ], 'nodes': [ { 'configuration': { 'agent': { 'agentAliasArn': 'string' }, 'collector': {}, 'condition': { 'conditions': [ { 'expression': 'string', 'name': 'string' }, ] }, 'input': {}, 'iterator': {}, 'knowledgeBase': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'knowledgeBaseId': 'string', 'modelId': 'string' }, 'lambdaFunction': { 'lambdaArn': 'string' }, 'lex': { 'botAliasArn': 'string', 'localeId': 'string' }, 'output': {}, 'prompt': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'sourceConfiguration': { 'inline': { 'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None, 'inferenceConfiguration': { 'text': { 'maxTokens': 123, 'stopSequences': [ 'string', ], 'temperature': ..., 'topP': ... 
} }, 'modelId': 'string', 'templateConfiguration': { 'chat': { 'inputVariables': [ { 'name': 'string' }, ], 'messages': [ { 'content': [ { 'text': 'string' }, ], 'role': 'user'|'assistant' }, ], 'system': [ { 'text': 'string' }, ], 'toolConfiguration': { 'toolChoice': { 'any': {}, 'auto': {}, 'tool': { 'name': 'string' } }, 'tools': [ { 'toolSpec': { 'description': 'string', 'inputSchema': { 'json': {...}|[...]|123|123.4|'string'|True|None }, 'name': 'string' } }, ] } }, 'text': { 'inputVariables': [ { 'name': 'string' }, ], 'text': 'string' } }, 'templateType': 'TEXT'|'CHAT' }, 'resource': { 'promptArn': 'string' } } }, 'retrieval': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } }, 'storage': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } } }, 'inputs': [ { 'expression': 'string', 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'name': 'string', 'outputs': [ { 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector' }, ] }, 'description': 'string', 'executionRoleArn': 'string', 'id': 'string', 'name': 'string', 'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared', 'version': 'string' }
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the flow.
createdAt (datetime) --
The time at which the flow was created.
customerEncryptionKeyArn (string) --
The KMS key that the flow is encrypted with.
definition (dict) --
A definition of the nodes and connections in the flow.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
configuration (dict) --
The configuration of the connection.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) --
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) --
The name of the output in the source node that the connection begins from.
targetInput (string) --
The name of the input in the target node that the connection ends at.
name (string) --
A name for the connection that you can reference.
source (string) --
The node that the connection starts at.
target (string) --
The node that the connection ends at.
type (string) --
Whether the source node that the connection begins from is a condition node ( Conditional) or not ( Data).
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
configuration (dict) --
Contains configurations for the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) --
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
condition (dict) --
Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) --
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
name (string) --
A name for the condition that you can reference.
input (dict) --
Contains configurations for an input flow node in your flow. This is the first node in the flow; inputs can't be specified for this node.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
knowledgeBaseId (string) --
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) --
The Amazon Resource Name (ARN) of the Lambda function to invoke.
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) --
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) --
The locale to invoke the Amazon Lex bot in (for example, en_US).
output (dict) --
Contains configurations for an output flow node in your flow. This is the last node in the flow; outputs can't be specified for this node.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
sourceConfiguration (dict) --
Specifies whether the prompt is from Prompt management or defined inline.
inline (dict) --
Contains configurations for a prompt that is defined inline.
additionalModelRequestFields (:ref:`document<document>`) --
Additional fields to be included in the model request for the Prompt node.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
modelId (string) --
The unique identifier of the model or inference profile to run inference with.
templateConfiguration (dict) --
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) --
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
name (string) --
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) --
The Amazon Resource Name (ARN) of the prompt from Prompt management.
retrieval (dict) --
Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) --
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) --
The name of the Amazon S3 bucket from which to retrieve data.
storage (dict) --
Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) --
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) --
The name of the Amazon S3 bucket in which to store the input into the node.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input to a node.
expression (string) --
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
name (string) --
A name for the input that you can reference.
type (string) --
The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
name (string) --
A name for the node.
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) --
A name for the output that you can reference.
type (string) --
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
type (string) --
The type of node. This value must match the name of the key that you provide in the configuration you provide in the FlowNodeConfiguration field.
description (string) --
The description of the version.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
id (string) --
The unique identifier of the flow.
name (string) --
The name of the version.
status (string) --
The status of the flow.
version (string) --
The version of the flow that was created. Versions are numbered incrementally, starting from 1.
{'variants': {'genAiResource': {'agent': {'agentIdentifier': 'string'}}, 'templateConfiguration': {'chat': {'inputVariables': [{'name': 'string'}], 'messages': [{'content': [{'text': 'string'}], 'role': 'user | assistant'}], 'system': [{'text': 'string'}], 'toolConfiguration': {'toolChoice': {'any': {}, 'auto': {}, 'tool': {'name': 'string'}}, 'tools': [{'toolSpec': {'description': 'string', 'inputSchema': {'json': {}}, 'name': 'string'}}]}}}, 'templateType': {'CHAT'}}}
Creates a prompt in your prompt library that you can add to a flow. For more information, see Prompt management in Amazon Bedrock, Create a prompt using Prompt management and Prompt flows in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.create_prompt( clientToken='string', customerEncryptionKeyArn='string', defaultVariant='string', description='string', name='string', tags={ 'string': 'string' }, variants=[ { 'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None, 'genAiResource': { 'agent': { 'agentIdentifier': 'string' } }, 'inferenceConfiguration': { 'text': { 'maxTokens': 123, 'stopSequences': [ 'string', ], 'temperature': ..., 'topP': ... } }, 'metadata': [ { 'key': 'string', 'value': 'string' }, ], 'modelId': 'string', 'name': 'string', 'templateConfiguration': { 'chat': { 'inputVariables': [ { 'name': 'string' }, ], 'messages': [ { 'content': [ { 'text': 'string' }, ], 'role': 'user'|'assistant' }, ], 'system': [ { 'text': 'string' }, ], 'toolConfiguration': { 'toolChoice': { 'any': {} , 'auto': {} , 'tool': { 'name': 'string' } }, 'tools': [ { 'toolSpec': { 'description': 'string', 'inputSchema': { 'json': {...}|[...]|123|123.4|'string'|True|None }, 'name': 'string' } }, ] } }, 'text': { 'inputVariables': [ { 'name': 'string' }, ], 'text': 'string' } }, 'templateType': 'TEXT'|'CHAT' }, ] )
string
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
string
The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.
string
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
string
A description for the prompt.
string
[REQUIRED]
A name for the prompt.
dict
Any tags that you want to attach to the prompt. For more information, see Tagging resources in Amazon Bedrock.
(string) --
(string) --
list
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
additionalModelRequestFields (document) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) -- [REQUIRED]
The ARN of the agent with which to use the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) -- [REQUIRED]
The key of a metadata tag for a prompt variant.
value (string) -- [REQUIRED]
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
name (string) -- [REQUIRED]
The name of the prompt variant.
templateConfiguration (dict) -- [REQUIRED]
Contains configurations for the prompt template.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) -- [REQUIRED]
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) -- [REQUIRED]
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) -- [REQUIRED]
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) -- [REQUIRED]
The name of the tool.
tools (list) -- [REQUIRED]
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) -- [REQUIRED]
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
name (string) -- [REQUIRED]
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) -- [REQUIRED]
The message for the prompt.
templateType (string) -- [REQUIRED]
The type of prompt template to use.
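The new CHAT template type described above can be sketched as a variant payload. This is a hedged example: the variant, variable, and tool names are invented for illustration, and the model ID should be replaced with a model or inference profile available in your account. The create_prompt call is left commented.

```python
# Sketch of one prompt variant using the CHAT template type, including
# the optional toolConfiguration. All names here are illustrative.
chat_variant = {
    "name": "cityGuideChat",
    "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",  # substitute your model
    "templateType": "CHAT",
    "templateConfiguration": {
        "chat": {
            "inputVariables": [{"name": "city"}],
            "system": [{"text": "You are a concise travel assistant."}],
            "messages": [
                {
                    "role": "user",
                    "content": [{"text": "Suggest one sight in {{city}}."}],
                }
            ],
            "toolConfiguration": {
                # Let the model decide whether to call the tool.
                "toolChoice": {"auto": {}},
                "tools": [
                    {
                        "toolSpec": {
                            "name": "lookup_sight",
                            "description": "Looks up a notable sight for a city.",
                            "inputSchema": {
                                "json": {
                                    "type": "object",
                                    "properties": {"city": {"type": "string"}},
                                    "required": ["city"],
                                }
                            },
                        }
                    }
                ],
            },
        }
    },
}

# With credentials configured:
# import boto3
# client = boto3.client("bedrock-agent")
# response = client.create_prompt(
#     name="cityGuide",
#     defaultVariant="cityGuideChat",
#     variants=[chat_variant],
# )
```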
dict
Response Syntax
{ 'arn': 'string', 'createdAt': datetime(2015, 1, 1), 'customerEncryptionKeyArn': 'string', 'defaultVariant': 'string', 'description': 'string', 'id': 'string', 'name': 'string', 'updatedAt': datetime(2015, 1, 1), 'variants': [ { 'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None, 'genAiResource': { 'agent': { 'agentIdentifier': 'string' } }, 'inferenceConfiguration': { 'text': { 'maxTokens': 123, 'stopSequences': [ 'string', ], 'temperature': ..., 'topP': ... } }, 'metadata': [ { 'key': 'string', 'value': 'string' }, ], 'modelId': 'string', 'name': 'string', 'templateConfiguration': { 'chat': { 'inputVariables': [ { 'name': 'string' }, ], 'messages': [ { 'content': [ { 'text': 'string' }, ], 'role': 'user'|'assistant' }, ], 'system': [ { 'text': 'string' }, ], 'toolConfiguration': { 'toolChoice': { 'any': {}, 'auto': {}, 'tool': { 'name': 'string' } }, 'tools': [ { 'toolSpec': { 'description': 'string', 'inputSchema': { 'json': {...}|[...]|123|123.4|'string'|True|None }, 'name': 'string' } }, ] } }, 'text': { 'inputVariables': [ { 'name': 'string' }, ], 'text': 'string' } }, 'templateType': 'TEXT'|'CHAT' }, ], 'version': 'string' }
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the prompt.
createdAt (datetime) --
The time at which the prompt was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that you encrypted the prompt with.
defaultVariant (string) --
The name of the default variant for your prompt.
description (string) --
The description of the prompt.
id (string) --
The unique identifier of the prompt.
name (string) --
The name of the prompt.
updatedAt (datetime) --
The time at which the prompt was last updated.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
additionalModelRequestFields (document) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) --
The ARN of the agent with which to use the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
name (string) --
The name of the prompt variant.
templateConfiguration (dict) --
Contains configurations for the prompt template.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) --
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
name (string) --
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template to use.
version (string) --
The version of the prompt. When you create a prompt, the version created is the DRAFT version.
{'variants': {'genAiResource': {'agent': {'agentIdentifier': 'string'}}, 'templateConfiguration': {'chat': {'inputVariables': [{'name': 'string'}], 'messages': [{'content': [{'text': 'string'}], 'role': 'user | assistant'}], 'system': [{'text': 'string'}], 'toolConfiguration': {'toolChoice': {'any': {}, 'auto': {}, 'tool': {'name': 'string'}}, 'tools': [{'toolSpec': {'description': 'string', 'inputSchema': {'json': {}}, 'name': 'string'}}]}}}, 'templateType': {'CHAT'}}}
Creates a static snapshot of your prompt that can be deployed to production. For more information, see Deploy prompts using Prompt management by creating versions in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.create_prompt_version( clientToken='string', description='string', promptIdentifier='string', tags={ 'string': 'string' } )
string
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
string
A description for the version of the prompt.
string
[REQUIRED]
The unique identifier of the prompt that you want to create a version of.
dict
Any tags that you want to attach to the version of the prompt. For more information, see Tagging resources in Amazon Bedrock.
(string) --
(string) --
dict
Response Syntax
{ 'arn': 'string', 'createdAt': datetime(2015, 1, 1), 'customerEncryptionKeyArn': 'string', 'defaultVariant': 'string', 'description': 'string', 'id': 'string', 'name': 'string', 'updatedAt': datetime(2015, 1, 1), 'variants': [ { 'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None, 'genAiResource': { 'agent': { 'agentIdentifier': 'string' } }, 'inferenceConfiguration': { 'text': { 'maxTokens': 123, 'stopSequences': [ 'string', ], 'temperature': ..., 'topP': ... } }, 'metadata': [ { 'key': 'string', 'value': 'string' }, ], 'modelId': 'string', 'name': 'string', 'templateConfiguration': { 'chat': { 'inputVariables': [ { 'name': 'string' }, ], 'messages': [ { 'content': [ { 'text': 'string' }, ], 'role': 'user'|'assistant' }, ], 'system': [ { 'text': 'string' }, ], 'toolConfiguration': { 'toolChoice': { 'any': {}, 'auto': {}, 'tool': { 'name': 'string' } }, 'tools': [ { 'toolSpec': { 'description': 'string', 'inputSchema': { 'json': {...}|[...]|123|123.4|'string'|True|None }, 'name': 'string' } }, ] } }, 'text': { 'inputVariables': [ { 'name': 'string' }, ], 'text': 'string' } }, 'templateType': 'TEXT'|'CHAT' }, ], 'version': 'string' }
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the version of the prompt.
createdAt (datetime) --
The time at which the prompt was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key to encrypt the version of the prompt.
defaultVariant (string) --
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
description (string) --
A description for the version.
id (string) --
The unique identifier of the prompt.
name (string) --
The name of the prompt.
updatedAt (datetime) --
The time at which the prompt was last updated.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
additionalModelRequestFields (document) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) --
The ARN of the agent with which to use the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
name (string) --
The name of the prompt variant.
templateConfiguration (dict) --
Contains configurations for the prompt template.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) --
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
name (string) --
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template to use.
version (string) --
The version of the prompt that was created. Versions are numbered incrementally, starting from 1.
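The create_prompt_version request above takes only a handful of parameters. A hedged sketch, with a placeholder prompt identifier and tag, and the call itself commented:

```python
# Sketch: snapshot the DRAFT of an existing prompt into an immutable
# version. The identifier and tag values are placeholders.
kwargs = {
    "promptIdentifier": "PROMPT12345",  # ID or ARN of an existing prompt
    "description": "First production snapshot",
    "tags": {"stage": "prod"},
}

# With credentials configured:
# import boto3
# client = boto3.client("bedrock-agent")
# response = client.create_prompt_version(**kwargs)
# Versions are numbered incrementally starting from 1, so the first
# snapshot comes back with response["version"] == "1".
```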
{'definition': {'nodes': {'configuration': {'knowledgeBase': {'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'}}, 'prompt': {'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'}, 'sourceConfiguration': {'inline': {'templateConfiguration': {'chat': {'inputVariables': [{'name': 'string'}], 'messages': [{'content': [{'text': 'string'}], 'role': 'user | assistant'}], 'system': [{'text': 'string'}], 'toolConfiguration': {'toolChoice': {'any': {}, 'auto': {}, 'tool': {'name': 'string'}}, 'tools': [{'toolSpec': {'description': 'string', 'inputSchema': {'json': {}}, 'name': 'string'}}]}}}, 'templateType': {'CHAT'}}}}}}}, 'validations': {'details': {'cyclicConnection': {'connection': 'string'}, 'duplicateConditionExpression': {'expression': 'string', 'node': 'string'}, 'duplicateConnections': {'source': 'string', 'target': 'string'}, 'incompatibleConnectionDataType': {'connection': 'string'}, 'malformedConditionExpression': {'cause': 'string', 'condition': 'string', 'node': 'string'}, 'malformedNodeInputExpression': {'cause': 'string', 'input': 'string', 'node': 'string'}, 'mismatchedNodeInputType': {'expectedType': 'String | Number | Boolean | Object | Array', 'input': 'string', 'node': 'string'}, 'mismatchedNodeOutputType': {'expectedType': 'String | Number | Boolean | Object | Array', 'node': 'string', 'output': 'string'}, 'missingConnectionConfiguration': {'connection': 'string'}, 'missingDefaultCondition': {'node': 'string'}, 'missingEndingNodes': {}, 'missingNodeConfiguration': {'node': 'string'}, 'missingNodeInput': {'input': 'string', 'node': 'string'}, 'missingNodeOutput': {'node': 'string', 'output': 'string'}, 'missingStartingNodes': {}, 'multipleNodeInputConnections': {'input': 'string', 'node': 'string'}, 'unfulfilledNodeInput': {'input': 'string', 'node': 'string'}, 'unknownConnectionCondition': {'connection': 'string'}, 'unknownConnectionSource': {'connection': 'string'}, 'unknownConnectionSourceOutput': {'connection': 'string'}, 'unknownConnectionTarget': {'connection': 'string'}, 'unknownConnectionTargetInput': {'connection': 'string'}, 'unreachableNode': {'node': 'string'}, 'unsatisfiedConnectionConditions': {'connection': 'string'}, 'unspecified': {}}, 'type': 'CyclicConnection | DuplicateConnections | DuplicateConditionExpression | UnreachableNode | UnknownConnectionSource | UnknownConnectionSourceOutput | UnknownConnectionTarget | UnknownConnectionTargetInput | UnknownConnectionCondition | MalformedConditionExpression | MalformedNodeInputExpression | MismatchedNodeInputType | MismatchedNodeOutputType | IncompatibleConnectionDataType | MissingConnectionConfiguration | MissingDefaultCondition | MissingEndingNodes | MissingNodeConfiguration | MissingNodeInput | MissingNodeOutput | MissingStartingNodes | MultipleNodeInputConnections | UnfulfilledNodeInput | UnsatisfiedConnectionConditions | Unspecified'}}
Retrieves information about a flow. For more information, see Manage a flow in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.get_flow( flowIdentifier='string' )
string
[REQUIRED]
The unique identifier of the flow.
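Since the get_flow response includes the validations array (with a severity of Warning or Error on each finding), a common pattern is to check for errors before preparing the flow. A sketch, with a placeholder flow identifier and the boto3 client usage commented:

```python
# Sketch: retrieve a flow and report whether its validation findings
# include any Error-severity entries. The flow ID below is a placeholder.

def flow_has_errors(client, flow_id):
    """Return True if any validation finding in the flow has Error severity."""
    response = client.get_flow(flowIdentifier=flow_id)
    return any(
        v.get("severity") == "Error" for v in response.get("validations", [])
    )

# With credentials configured:
# import boto3
# client = boto3.client("bedrock-agent")
# if flow_has_errors(client, "FLOW12345"):
#     print("Fix validation errors before preparing the flow.")
```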
dict
Response Syntax
{ 'arn': 'string', 'createdAt': datetime(2015, 1, 1), 'customerEncryptionKeyArn': 'string', 'definition': { 'connections': [ { 'configuration': { 'conditional': { 'condition': 'string' }, 'data': { 'sourceOutput': 'string', 'targetInput': 'string' } }, 'name': 'string', 'source': 'string', 'target': 'string', 'type': 'Data'|'Conditional' }, ], 'nodes': [ { 'configuration': { 'agent': { 'agentAliasArn': 'string' }, 'collector': {}, 'condition': { 'conditions': [ { 'expression': 'string', 'name': 'string' }, ] }, 'input': {}, 'iterator': {}, 'knowledgeBase': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'knowledgeBaseId': 'string', 'modelId': 'string' }, 'lambdaFunction': { 'lambdaArn': 'string' }, 'lex': { 'botAliasArn': 'string', 'localeId': 'string' }, 'output': {}, 'prompt': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'sourceConfiguration': { 'inline': { 'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None, 'inferenceConfiguration': { 'text': { 'maxTokens': 123, 'stopSequences': [ 'string', ], 'temperature': ..., 'topP': ... 
} }, 'modelId': 'string', 'templateConfiguration': { 'chat': { 'inputVariables': [ { 'name': 'string' }, ], 'messages': [ { 'content': [ { 'text': 'string' }, ], 'role': 'user'|'assistant' }, ], 'system': [ { 'text': 'string' }, ], 'toolConfiguration': { 'toolChoice': { 'any': {}, 'auto': {}, 'tool': { 'name': 'string' } }, 'tools': [ { 'toolSpec': { 'description': 'string', 'inputSchema': { 'json': {...}|[...]|123|123.4|'string'|True|None }, 'name': 'string' } }, ] } }, 'text': { 'inputVariables': [ { 'name': 'string' }, ], 'text': 'string' } }, 'templateType': 'TEXT'|'CHAT' }, 'resource': { 'promptArn': 'string' } } }, 'retrieval': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } }, 'storage': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } } }, 'inputs': [ { 'expression': 'string', 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'name': 'string', 'outputs': [ { 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector' }, ] }, 'description': 'string', 'executionRoleArn': 'string', 'id': 'string', 'name': 'string', 'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared', 'updatedAt': datetime(2015, 1, 1), 'validations': [ { 'details': { 'cyclicConnection': { 'connection': 'string' }, 'duplicateConditionExpression': { 'expression': 'string', 'node': 'string' }, 'duplicateConnections': { 'source': 'string', 'target': 'string' }, 'incompatibleConnectionDataType': { 'connection': 'string' }, 'malformedConditionExpression': { 'cause': 'string', 'condition': 'string', 'node': 'string' }, 'malformedNodeInputExpression': { 'cause': 'string', 'input': 'string', 'node': 'string' }, 'mismatchedNodeInputType': { 'expectedType': 'String'|'Number'|'Boolean'|'Object'|'Array', 'input': 'string', 'node': 'string' }, 'mismatchedNodeOutputType': { 'expectedType': 
'String'|'Number'|'Boolean'|'Object'|'Array', 'node': 'string', 'output': 'string' }, 'missingConnectionConfiguration': { 'connection': 'string' }, 'missingDefaultCondition': { 'node': 'string' }, 'missingEndingNodes': {}, 'missingNodeConfiguration': { 'node': 'string' }, 'missingNodeInput': { 'input': 'string', 'node': 'string' }, 'missingNodeOutput': { 'node': 'string', 'output': 'string' }, 'missingStartingNodes': {}, 'multipleNodeInputConnections': { 'input': 'string', 'node': 'string' }, 'unfulfilledNodeInput': { 'input': 'string', 'node': 'string' }, 'unknownConnectionCondition': { 'connection': 'string' }, 'unknownConnectionSource': { 'connection': 'string' }, 'unknownConnectionSourceOutput': { 'connection': 'string' }, 'unknownConnectionTarget': { 'connection': 'string' }, 'unknownConnectionTargetInput': { 'connection': 'string' }, 'unreachableNode': { 'node': 'string' }, 'unsatisfiedConnectionConditions': { 'connection': 'string' }, 'unspecified': {} }, 'message': 'string', 'severity': 'Warning'|'Error', 'type': 'CyclicConnection'|'DuplicateConnections'|'DuplicateConditionExpression'|'UnreachableNode'|'UnknownConnectionSource'|'UnknownConnectionSourceOutput'|'UnknownConnectionTarget'|'UnknownConnectionTargetInput'|'UnknownConnectionCondition'|'MalformedConditionExpression'|'MalformedNodeInputExpression'|'MismatchedNodeInputType'|'MismatchedNodeOutputType'|'IncompatibleConnectionDataType'|'MissingConnectionConfiguration'|'MissingDefaultCondition'|'MissingEndingNodes'|'MissingNodeConfiguration'|'MissingNodeInput'|'MissingNodeOutput'|'MissingStartingNodes'|'MultipleNodeInputConnections'|'UnfulfilledNodeInput'|'UnsatisfiedConnectionConditions'|'Unspecified' }, ], 'version': 'string' }
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the flow.
createdAt (datetime) --
The time at which the flow was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that the flow is encrypted with.
definition (dict) --
The definition of the nodes and connections between the nodes in the flow.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
configuration (dict) --
The configuration of the connection.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) --
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) --
The name of the output in the source node that the connection begins from.
targetInput (string) --
The name of the input in the target node that the connection ends at.
name (string) --
A name for the connection that you can reference.
source (string) --
The node that the connection starts at.
target (string) --
The node that the connection ends at.
type (string) --
Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).
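As a sketch, here is how the two connection types might be built in Python. The node names ("FlowInput", "Router", "Answer") and field names passed in are illustrative only, not taken from a real flow:

```python
# Hypothetical helpers that build the two connection shapes described above.
# Node and output/input names are placeholders.

def data_connection(source, target, source_output, target_input):
    """Connection from a non-Condition node: wires one output to one input."""
    return {
        "name": f"{source}_to_{target}",
        "source": source,
        "target": target,
        "type": "Data",
        "configuration": {
            "data": {"sourceOutput": source_output, "targetInput": target_input}
        },
    }

def conditional_connection(source, target, condition):
    """Connection from a Condition node: followed when the named condition fires."""
    return {
        "name": f"{source}_to_{target}",
        "source": source,
        "target": target,
        "type": "Conditional",
        "configuration": {"conditional": {"condition": condition}},
    }

conn = data_connection("FlowInput", "Router", "document", "codeHookInput")
```

The `type` field must agree with which key appears under `configuration` (`data` vs. `conditional`), which is why a helper per type is convenient.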
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
configuration (dict) --
Contains configurations for the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) --
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
condition (dict) --
Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) --
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
name (string) --
A name for the condition that you can reference.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
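The iterator/collector pairing described above can be sketched as two node definitions. This is a minimal, hypothetical shape; the node names, input expressions, and output names are placeholders rather than required values:

```python
# Sketch of an iterator -> collector pattern: the iterator fans an array out
# item by item, and a downstream collector gathers the per-item results back
# into a single array. All names and expressions are illustrative.

iterator_node = {
    "name": "SplitItems",
    "type": "Iterator",
    "configuration": {"iterator": {}},
    "inputs": [{"name": "array", "type": "Array", "expression": "$.data"}],
    "outputs": [
        {"name": "arrayItem", "type": "String"},
        {"name": "arraySize", "type": "Number"},  # array size is also emitted
    ],
}

collector_node = {
    "name": "GatherItems",
    "type": "Collector",
    "configuration": {"collector": {}},
    "inputs": [
        {"name": "arrayItem", "type": "String", "expression": "$.data"},
        {"name": "arraySize", "type": "Number", "expression": "$.data"},
    ],
    "outputs": [{"name": "collectedArray", "type": "Array"}],
}
```

Without the collector, the output node would return one response per array member.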
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
knowledgeBaseId (string) --
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
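The effect of omitting `modelId` can be shown with two hypothetical knowledge base node configurations; the knowledge base ID, model ID, and guardrail identifier below are placeholders:

```python
# With modelId set, the node generates a response from the query results;
# the optional guardrailConfiguration (new in this release) is applied
# during query and response generation. IDs are placeholders.
kb_generate = {
    "knowledgeBase": {
        "knowledgeBaseId": "KB12345678",
        "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",
        "guardrailConfiguration": {
            "guardrailIdentifier": "gr-example",
            "guardrailVersion": "1",
        },
    }
}

# Omit modelId to return the retrieved results as an array instead of a
# generated answer.
kb_retrieve_only = {
    "knowledgeBase": {"knowledgeBaseId": "KB12345678"}
}
```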
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) --
The Amazon Resource Name (ARN) of the Lambda function to invoke.
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) --
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) --
The identifier of the locale (language and Region) in which to invoke the Amazon Lex bot.
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
sourceConfiguration (dict) --
Specifies whether the prompt is from Prompt management or defined inline.
inline (dict) --
Contains configurations for a prompt that is defined inline.
additionalModelRequestFields (document) --
Additional fields to be included in the model request for the Prompt node.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
modelId (string) --
The unique identifier of the model or inference profile to run inference with.
templateConfiguration (dict) --
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) --
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
name (string) --
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) --
The Amazon Resource Name (ARN) of the prompt from Prompt management.
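The two additions from this release for the prompt node (chat template configuration and an optional guardrail) can be sketched together in one inline configuration. The model ID, guardrail identifier, variable name, and message text are placeholders:

```python
# Hypothetical prompt node configuration: an inline CHAT prompt with an
# optional guardrail attached -- the capabilities added in this release.
# All identifiers and text are illustrative.
prompt_node_config = {
    "prompt": {
        "guardrailConfiguration": {
            "guardrailIdentifier": "gr-example",
            "guardrailVersion": "DRAFT",
        },
        "sourceConfiguration": {
            "inline": {
                "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
                "templateType": "CHAT",
                "templateConfiguration": {
                    "chat": {
                        "inputVariables": [{"name": "topic"}],
                        "system": [{"text": "You answer in one sentence."}],
                        "messages": [
                            {
                                "role": "user",
                                "content": [{"text": "Summarize {{topic}}."}],
                            },
                        ],
                    }
                },
                "inferenceConfiguration": {
                    "text": {"maxTokens": 256, "temperature": 0.2}
                },
            }
        },
    }
}
```

To use a prompt from Prompt management instead, `sourceConfiguration` would carry a `resource` key with a `promptArn` rather than the `inline` block.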
retrieval (dict) --
Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) --
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) --
The name of the Amazon S3 bucket from which to retrieve data.
storage (dict) --
Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) --
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) --
The name of the Amazon S3 bucket in which to store the input into the node.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input to a node.
expression (string) --
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
name (string) --
A name for the input that you can reference.
type (string) --
The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
name (string) --
A name for the node.
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) --
A name for the output that you can reference.
type (string) --
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
type (string) --
The type of node. This value must match the name of the key that you provide in the configuration you provide in the FlowNodeConfiguration field.
description (string) --
The description of the flow.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
id (string) --
The unique identifier of the flow.
name (string) --
The name of the flow.
status (string) --
The status of the flow. The following statuses are possible:
NotPrepared – The flow has been created or updated, but hasn't been prepared. If you just created the flow, you can't test it. If you updated the flow, the DRAFT version won't contain the latest changes for testing. Send a PrepareFlow request to package the latest changes into the DRAFT version.
Preparing – The flow is being prepared so that the DRAFT version contains the latest changes for testing.
Prepared – The flow is prepared and the DRAFT version contains the latest changes for testing.
Failed – The last API operation that you invoked on the flow failed. Send a GetFlow request and check the error message in the validations field.
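The four statuses above suggest a small handling routine. The `prepare_flow` and `get_flow` calls in the second function are real bedrock-agent operations but the polling pattern itself is just a sketch; it requires AWS credentials to run:

```python
# Minimal status-handling sketch for the four flow statuses listed above.

def next_action(status):
    """Map a flow status to the follow-up the documentation suggests."""
    actions = {
        "NotPrepared": "send a PrepareFlow request to package changes into DRAFT",
        "Preparing": "wait and poll GetFlow again",
        "Prepared": "ready: the DRAFT version contains the latest changes",
        "Failed": "send a GetFlow request and inspect the validations field",
    }
    return actions[status]

def wait_until_prepared(client, flow_id, delay=2):
    """Prepare a flow and poll until it is Prepared or Failed.

    `client` is a boto3 bedrock-agent client; requires AWS credentials.
    """
    import time
    client.prepare_flow(flowIdentifier=flow_id)
    while True:
        status = client.get_flow(flowIdentifier=flow_id)["status"]
        if status in ("Prepared", "Failed"):
            return status
        time.sleep(delay)
```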
updatedAt (datetime) --
The time at which the flow was last updated.
validations (list) --
A list of validation error messages related to the last failed operation on the flow.
(dict) --
Contains information about validation of the flow.
details (dict) --
Specific details about the validation issue encountered in the flow.
cyclicConnection (dict) --
Details about a cyclic connection in the flow.
connection (string) --
The name of the connection that causes the cycle in the flow.
duplicateConditionExpression (dict) --
Details about duplicate condition expressions in a node.
expression (string) --
The duplicated condition expression.
node (string) --
The name of the node containing the duplicate condition expressions.
duplicateConnections (dict) --
Details about duplicate connections between nodes.
source (string) --
The name of the source node where the duplicate connection starts.
target (string) --
The name of the target node where the duplicate connection ends.
incompatibleConnectionDataType (dict) --
Details about incompatible data types in a connection.
connection (string) --
The name of the connection with incompatible data types.
malformedConditionExpression (dict) --
Details about a malformed condition expression in a node.
cause (string) --
The error message describing why the condition expression is malformed.
condition (string) --
The name of the malformed condition.
node (string) --
The name of the node containing the malformed condition expression.
malformedNodeInputExpression (dict) --
Details about a malformed input expression in a node.
cause (string) --
The error message describing why the input expression is malformed.
input (string) --
The name of the input with the malformed expression.
node (string) --
The name of the node containing the malformed input expression.
mismatchedNodeInputType (dict) --
Details about mismatched input data types in a node.
expectedType (string) --
The expected data type for the node input.
input (string) --
The name of the input with the mismatched data type.
node (string) --
The name of the node containing the input with the mismatched data type.
mismatchedNodeOutputType (dict) --
Details about mismatched output data types in a node.
expectedType (string) --
The expected data type for the node output.
node (string) --
The name of the node containing the output with the mismatched data type.
output (string) --
The name of the output with the mismatched data type.
missingConnectionConfiguration (dict) --
Details about missing configuration for a connection.
connection (string) --
The name of the connection missing configuration.
missingDefaultCondition (dict) --
Details about a missing default condition in a conditional node.
node (string) --
The name of the node missing the default condition.
missingEndingNodes (dict) --
Details about missing ending nodes in the flow.
missingNodeConfiguration (dict) --
Details about missing configuration for a node.
node (string) --
The name of the node missing configuration.
missingNodeInput (dict) --
Details about a missing required input in a node.
input (string) --
The name of the missing input.
node (string) --
The name of the node missing the required input.
missingNodeOutput (dict) --
Details about a missing required output in a node.
node (string) --
The name of the node missing the required output.
output (string) --
The name of the missing output.
missingStartingNodes (dict) --
Details about missing starting nodes in the flow.
multipleNodeInputConnections (dict) --
Details about multiple connections to a single node input.
input (string) --
The name of the input with multiple connections to it.
node (string) --
The name of the node containing the input with multiple connections.
unfulfilledNodeInput (dict) --
Details about an unfulfilled node input with no valid connections.
input (string) --
The name of the unfulfilled input. An input is unfulfilled if there are no data connections to it.
node (string) --
The name of the node containing the unfulfilled input.
unknownConnectionCondition (dict) --
Details about an unknown condition for a connection.
connection (string) --
The name of the connection with the unknown condition.
unknownConnectionSource (dict) --
Details about an unknown source node for a connection.
connection (string) --
The name of the connection with the unknown source.
unknownConnectionSourceOutput (dict) --
Details about an unknown source output for a connection.
connection (string) --
The name of the connection with the unknown source output.
unknownConnectionTarget (dict) --
Details about an unknown target node for a connection.
connection (string) --
The name of the connection with the unknown target.
unknownConnectionTargetInput (dict) --
Details about an unknown target input for a connection.
connection (string) --
The name of the connection with the unknown target input.
unreachableNode (dict) --
Details about an unreachable node in the flow.
node (string) --
The name of the unreachable node.
unsatisfiedConnectionConditions (dict) --
Details about unsatisfied conditions for a connection.
connection (string) --
The name of the connection with unsatisfied conditions.
unspecified (dict) --
Details about an unspecified validation.
message (string) --
A message describing the validation error.
severity (string) --
The severity of the issue described in the message.
type (string) --
The type of validation issue encountered in the flow.
version (string) --
The version of the flow for which information was retrieved.
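Putting the new API together: a deliberately empty definition should surface validation issues such as MissingStartingNodes and MissingEndingNodes, which the sketch below groups by severity. The region is a placeholder, and `run_validation` is left uninvoked because it requires AWS credentials:

```python
# Hedged sketch of the new validate_flow_definition call. The helper groups
# the returned validations by severity for quick triage.

def summarize_validations(validations):
    """Group validation messages by severity."""
    summary = {"Error": [], "Warning": []}
    for v in validations:
        summary[v["severity"]].append(f'{v["type"]}: {v["message"]}')
    return summary

def run_validation():
    """Validate an (intentionally broken) empty flow definition.

    Requires AWS credentials; region is a placeholder.
    """
    import boto3
    client = boto3.client("bedrock-agent", region_name="us-east-1")
    resp = client.validate_flow_definition(
        definition={"nodes": [], "connections": []}
    )
    return summarize_validations(resp["validations"])

# To run against a real account:
# print(run_validation())
```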
{'definition': {'nodes': {'configuration': {'knowledgeBase': {'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'}}, 'prompt': {'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'}, 'sourceConfiguration': {'inline': {'templateConfiguration': {'chat': {'inputVariables': [{'name': 'string'}], 'messages': [{'content': [{'text': 'string'}], 'role': 'user | assistant'}], 'system': [{'text': 'string'}], 'toolConfiguration': {'toolChoice': {'any': {}, 'auto': {}, 'tool': {'name': 'string'}}, 'tools': [{'toolSpec': {'description': 'string', 'inputSchema': {'json': {}}, 'name': 'string'}}]}}}, 'templateType': {'CHAT'}}}}}}}}
Retrieves information about a version of a flow. For more information, see Deploy a flow in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.get_flow_version( flowIdentifier='string', flowVersion='string' )
string
[REQUIRED]
The unique identifier of the flow for which to get information.
string
[REQUIRED]
The version of the flow for which to get information.
dict
Response Syntax
{ 'arn': 'string', 'createdAt': datetime(2015, 1, 1), 'customerEncryptionKeyArn': 'string', 'definition': { 'connections': [ { 'configuration': { 'conditional': { 'condition': 'string' }, 'data': { 'sourceOutput': 'string', 'targetInput': 'string' } }, 'name': 'string', 'source': 'string', 'target': 'string', 'type': 'Data'|'Conditional' }, ], 'nodes': [ { 'configuration': { 'agent': { 'agentAliasArn': 'string' }, 'collector': {}, 'condition': { 'conditions': [ { 'expression': 'string', 'name': 'string' }, ] }, 'input': {}, 'iterator': {}, 'knowledgeBase': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'knowledgeBaseId': 'string', 'modelId': 'string' }, 'lambdaFunction': { 'lambdaArn': 'string' }, 'lex': { 'botAliasArn': 'string', 'localeId': 'string' }, 'output': {}, 'prompt': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'sourceConfiguration': { 'inline': { 'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None, 'inferenceConfiguration': { 'text': { 'maxTokens': 123, 'stopSequences': [ 'string', ], 'temperature': ..., 'topP': ... 
} }, 'modelId': 'string', 'templateConfiguration': { 'chat': { 'inputVariables': [ { 'name': 'string' }, ], 'messages': [ { 'content': [ { 'text': 'string' }, ], 'role': 'user'|'assistant' }, ], 'system': [ { 'text': 'string' }, ], 'toolConfiguration': { 'toolChoice': { 'any': {}, 'auto': {}, 'tool': { 'name': 'string' } }, 'tools': [ { 'toolSpec': { 'description': 'string', 'inputSchema': { 'json': {...}|[...]|123|123.4|'string'|True|None }, 'name': 'string' } }, ] } }, 'text': { 'inputVariables': [ { 'name': 'string' }, ], 'text': 'string' } }, 'templateType': 'TEXT'|'CHAT' }, 'resource': { 'promptArn': 'string' } } }, 'retrieval': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } }, 'storage': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } } }, 'inputs': [ { 'expression': 'string', 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'name': 'string', 'outputs': [ { 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector' }, ] }, 'description': 'string', 'executionRoleArn': 'string', 'id': 'string', 'name': 'string', 'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared', 'version': 'string' }
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the flow.
createdAt (datetime) --
The time at which the flow was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that the version of the flow is encrypted with.
definition (dict) --
The definition of the nodes and connections between nodes in the flow.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
configuration (dict) --
The configuration of the connection.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) --
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) --
The name of the output in the source node that the connection begins from.
targetInput (string) --
The name of the input in the target node that the connection ends at.
name (string) --
A name for the connection that you can reference.
source (string) --
The node that the connection starts at.
target (string) --
The node that the connection ends at.
type (string) --
Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
configuration (dict) --
Contains configurations for the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) --
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
condition (dict) --
Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) --
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
name (string) --
A name for the condition that you can reference.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
knowledgeBaseId (string) --
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) --
The Amazon Resource Name (ARN) of the Lambda function to invoke.
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) --
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) --
The identifier of the locale (language and Region) in which to invoke the Amazon Lex bot.
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
sourceConfiguration (dict) --
Specifies whether the prompt is from Prompt management or defined inline.
inline (dict) --
Contains configurations for a prompt that is defined inline.
additionalModelRequestFields (document) --
Additional fields to be included in the model request for the Prompt node.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
modelId (string) --
The unique identifier of the model or inference profile to run inference with.
templateConfiguration (dict) --
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) --
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
name (string) --
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) --
The Amazon Resource Name (ARN) of the prompt from Prompt management.
retrieval (dict) --
Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) --
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) --
The name of the Amazon S3 bucket from which to retrieve data.
storage (dict) --
Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) --
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) --
The name of the Amazon S3 bucket in which to store the input into the node.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input to a node.
expression (string) --
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
name (string) --
A name for the input that you can reference.
type (string) --
The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
name (string) --
A name for the node.
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) --
A name for the output that you can reference.
type (string) --
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
type (string) --
The type of node. This value must match the name of the key that you provide in the FlowNodeConfiguration field.
description (string) --
The description of the flow.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
id (string) --
The unique identifier of the flow.
name (string) --
The name of the version.
status (string) --
The status of the flow.
version (string) --
The version of the flow for which information was retrieved.
{'variants': {'genAiResource': {'agent': {'agentIdentifier': 'string'}}, 'templateConfiguration': {'chat': {'inputVariables': [{'name': 'string'}], 'messages': [{'content': [{'text': 'string'}], 'role': 'user | assistant'}], 'system': [{'text': 'string'}], 'toolConfiguration': {'toolChoice': {'any': {}, 'auto': {}, 'tool': {'name': 'string'}}, 'tools': [{'toolSpec': {'description': 'string', 'inputSchema': {'json': {}}, 'name': 'string'}}]}}}, 'templateType': {'CHAT'}}}
Retrieves information about the working draft ( DRAFT version) of a prompt or a version of it, depending on whether you include the promptVersion field or not. For more information, see View information about prompts using Prompt management and View information about a version of your prompt in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.get_prompt( promptIdentifier='string', promptVersion='string' )
string
[REQUIRED]
The unique identifier of the prompt.
string
The version of the prompt about which you want to retrieve information. Omit this field to return information about the working draft of the prompt.
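Because promptVersion is optional, a thin helper can make the draft-vs-version choice explicit. This is a minimal sketch; the prompt identifier and version below are placeholders.

```python
# Sketch: build the arguments for get_prompt. Omitting promptVersion
# returns the working draft (DRAFT version) of the prompt.
def build_get_prompt_kwargs(prompt_identifier, prompt_version=None):
    kwargs = {"promptIdentifier": prompt_identifier}
    if prompt_version is not None:
        kwargs["promptVersion"] = prompt_version
    return kwargs

# client = boto3.client("bedrock-agent")
# draft = client.get_prompt(**build_get_prompt_kwargs("PROMPT12345"))
# v1 = client.get_prompt(**build_get_prompt_kwargs("PROMPT12345", "1"))
```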
dict
Response Syntax
{ 'arn': 'string', 'createdAt': datetime(2015, 1, 1), 'customerEncryptionKeyArn': 'string', 'defaultVariant': 'string', 'description': 'string', 'id': 'string', 'name': 'string', 'updatedAt': datetime(2015, 1, 1), 'variants': [ { 'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None, 'genAiResource': { 'agent': { 'agentIdentifier': 'string' } }, 'inferenceConfiguration': { 'text': { 'maxTokens': 123, 'stopSequences': [ 'string', ], 'temperature': ..., 'topP': ... } }, 'metadata': [ { 'key': 'string', 'value': 'string' }, ], 'modelId': 'string', 'name': 'string', 'templateConfiguration': { 'chat': { 'inputVariables': [ { 'name': 'string' }, ], 'messages': [ { 'content': [ { 'text': 'string' }, ], 'role': 'user'|'assistant' }, ], 'system': [ { 'text': 'string' }, ], 'toolConfiguration': { 'toolChoice': { 'any': {}, 'auto': {}, 'tool': { 'name': 'string' } }, 'tools': [ { 'toolSpec': { 'description': 'string', 'inputSchema': { 'json': {...}|[...]|123|123.4|'string'|True|None }, 'name': 'string' } }, ] } }, 'text': { 'inputVariables': [ { 'name': 'string' }, ], 'text': 'string' } }, 'templateType': 'TEXT'|'CHAT' }, ], 'version': 'string' }
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the prompt or the prompt version (if you specified a version in the request).
createdAt (datetime) --
The time at which the prompt was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that the prompt is encrypted with.
defaultVariant (string) --
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
description (string) --
The description of the prompt.
id (string) --
The unique identifier of the prompt.
name (string) --
The name of the prompt.
updatedAt (datetime) --
The time at which the prompt was last updated.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
additionalModelRequestFields (:ref:`document<document>`) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) --
The ARN of the agent with which to use the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
name (string) --
The name of the prompt variant.
templateConfiguration (dict) --
Contains configurations for the prompt template.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) --
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
name (string) --
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template to use.
version (string) --
The version of the prompt.
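A common task with this response is pulling the template body of the default variant. The sketch below handles both template types; the mock response is a hypothetical, minimal stand-in for a real get_prompt result.

```python
def default_variant_template(response):
    """Return the template body of the default variant in a get_prompt
    response: the raw text for TEXT templates, or (role, text) pairs
    for CHAT templates."""
    variant = next(v for v in response["variants"]
                   if v["name"] == response["defaultVariant"])
    template = variant["templateConfiguration"]
    if variant["templateType"] == "TEXT":
        return template["text"]["text"]
    return [(m["role"], " ".join(c["text"] for c in m["content"]))
            for m in template["chat"]["messages"]]

# A minimal mock of a get_prompt response (TEXT template):
mock_response = {
    "defaultVariant": "draft",
    "variants": [{
        "name": "draft",
        "templateType": "TEXT",
        "templateConfiguration": {"text": {"text": "Summarize {{topic}}."}},
    }],
}
```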
{'definition': {'nodes': {'configuration': {'knowledgeBase': {'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'}}, 'prompt': {'guardrailConfiguration': {'guardrailIdentifier': 'string', 'guardrailVersion': 'string'}, 'sourceConfiguration': {'inline': {'templateConfiguration': {'chat': {'inputVariables': [{'name': 'string'}], 'messages': [{'content': [{'text': 'string'}], 'role': 'user | assistant'}], 'system': [{'text': 'string'}], 'toolConfiguration': {'toolChoice': {'any': {}, 'auto': {}, 'tool': {'name': 'string'}}, 'tools': [{'toolSpec': {'description': 'string', 'inputSchema': {'json': {}}, 'name': 'string'}}]}}}, 'templateType': {'CHAT'}}}}}}}}
Modifies a flow. Include both fields that you want to keep and fields that you want to change. For more information, see How it works and Create a flow in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.update_flow( customerEncryptionKeyArn='string', definition={ 'connections': [ { 'configuration': { 'conditional': { 'condition': 'string' }, 'data': { 'sourceOutput': 'string', 'targetInput': 'string' } }, 'name': 'string', 'source': 'string', 'target': 'string', 'type': 'Data'|'Conditional' }, ], 'nodes': [ { 'configuration': { 'agent': { 'agentAliasArn': 'string' }, 'collector': {}, 'condition': { 'conditions': [ { 'expression': 'string', 'name': 'string' }, ] }, 'input': {}, 'iterator': {}, 'knowledgeBase': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'knowledgeBaseId': 'string', 'modelId': 'string' }, 'lambdaFunction': { 'lambdaArn': 'string' }, 'lex': { 'botAliasArn': 'string', 'localeId': 'string' }, 'output': {}, 'prompt': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'sourceConfiguration': { 'inline': { 'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None, 'inferenceConfiguration': { 'text': { 'maxTokens': 123, 'stopSequences': [ 'string', ], 'temperature': ..., 'topP': ... } }, 'modelId': 'string', 'templateConfiguration': { 'chat': { 'inputVariables': [ { 'name': 'string' }, ], 'messages': [ { 'content': [ { 'text': 'string' }, ], 'role': 'user'|'assistant' }, ], 'system': [ { 'text': 'string' }, ], 'toolConfiguration': { 'toolChoice': { 'any': {}, 'auto': {}, 'tool': { 'name': 'string' } }, 'tools': [ { 'toolSpec': { 'description': 'string', 'inputSchema': { 'json': {...}|[...]|123|123.4|'string'|True|None }, 'name': 'string' } }, ] } }, 'text': { 'inputVariables': [ { 'name': 'string' }, ], 'text': 'string' } }, 'templateType': 'TEXT'|'CHAT' }, 'resource': { 'promptArn': 'string' } } }, 'retrieval': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } }, 'storage': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } } }, 'inputs': [ { 'expression': 'string', 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'name': 'string', 'outputs': [ { 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector' }, ] }, description='string', executionRoleArn='string', flowIdentifier='string', name='string' )
string
The Amazon Resource Name (ARN) of the KMS key to encrypt the flow.
dict
A definition of the nodes and the connections between the nodes in the flow.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
configuration (dict) --
The configuration of the connection.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) -- [REQUIRED]
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) -- [REQUIRED]
The name of the output in the source node that the connection begins from.
targetInput (string) -- [REQUIRED]
The name of the input in the target node that the connection ends at.
name (string) -- [REQUIRED]
A name for the connection that you can reference.
source (string) -- [REQUIRED]
The node that the connection starts at.
target (string) -- [REQUIRED]
The node that the connection ends at.
type (string) -- [REQUIRED]
Whether the source node that the connection begins from is a condition node ( Conditional) or not ( Data).
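The two connection shapes above can be illustrated side by side. The node, output, input, and condition names here are hypothetical; a Data connection carries a named output to a named input, while a Conditional connection names the condition on its Condition-node source that routes to the target.

```python
# A Data connection maps an output of one node to an input of another.
data_connection = {
    "name": "PromptToOutput",
    "source": "MyPrompt",
    "target": "FlowOutput",
    "type": "Data",
    "configuration": {
        "data": {"sourceOutput": "modelCompletion", "targetInput": "document"}
    },
}

# A Conditional connection starts at a Condition node and names the
# condition that, when satisfied, routes to the target node.
conditional_connection = {
    "name": "OnHighScore",
    "source": "ScoreCheck",
    "target": "MyPrompt",
    "type": "Conditional",
    "configuration": {"conditional": {"condition": "isHigh"}},
}
```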
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
configuration (dict) --
Contains configurations for the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
condition (dict) --
Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) -- [REQUIRED]
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
name (string) -- [REQUIRED]
A name for the condition that you can reference.
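A Condition node definition under these rules might look like the following sketch. The node, input, and condition names are illustrative; the expression references the node's "score" input, and the expressionless "default" entry is the fallback branch.

```python
# A Condition node with one named branch plus a default fallback.
# The expression must reference at least one of the node's inputs.
condition_node = {
    "name": "ScoreCheck",
    "type": "Condition",
    "inputs": [
        {"name": "score", "type": "Number", "expression": "$.data"},
    ],
    "configuration": {
        "condition": {
            "conditions": [
                {"name": "isHigh", "expression": "score >= 0.8"},
                {"name": "default"},  # taken when no other condition matches
            ]
        }
    },
}
```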
input (dict) --
Contains configurations for an input flow node in your flow. This is the first node in the flow. You can't specify inputs for this node.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
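The iterator/collector relationship described above can be sketched in plain Python (the function and key names here are illustrative, not API fields): the iterator fans each array member out to the downstream node and also outputs the array size, and a collector consolidates the per-member results back into a single array.

```python
def run_iterator(items, downstream):
    """Local sketch of iterator/collector semantics in a flow."""
    array_size = len(items)                           # iterator's size output
    collected = [downstream(item) for item in items]  # one pass per member
    return {"collectedArray": collected, "arraySize": array_size}
```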
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
knowledgeBaseId (string) -- [REQUIRED]
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
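The modelId field toggles the knowledge base node between two behaviors, which the following hedged sketch contrasts. The knowledge base ID, model ID, and guardrail identifier are placeholders.

```python
# With modelId (and an optional guardrail): generate a response from
# the query results.
kb_generate_config = {
    "knowledgeBase": {
        "knowledgeBaseId": "KB12345",
        "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",
        "guardrailConfiguration": {
            "guardrailIdentifier": "gr-EXAMPLE",
            "guardrailVersion": "DRAFT",
        },
    }
}

# Without modelId: return the retrieved results as an array.
kb_retrieve_only_config = {
    "knowledgeBase": {"knowledgeBaseId": "KB12345"}
}
```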
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Lambda function to invoke.
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) -- [REQUIRED]
The locale of the Amazon Lex bot to invoke (for example, en_US).
output (dict) --
Contains configurations for an output flow node in your flow. This is the last node in the flow. You can't specify outputs for this node.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
sourceConfiguration (dict) -- [REQUIRED]
Specifies whether the prompt is from Prompt management or defined inline.
inline (dict) --
Contains configurations for a prompt that is defined inline.
additionalModelRequestFields (:ref:`document<document>`) --
Additional fields to be included in the model request for the Prompt node.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
modelId (string) -- [REQUIRED]
The unique identifier of the model or inference profile to run inference with.
templateConfiguration (dict) -- [REQUIRED]
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) -- [REQUIRED]
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) -- [REQUIRED]
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) -- [REQUIRED]
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) -- [REQUIRED]
The name of the tool.
tools (list) -- [REQUIRED]
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) -- [REQUIRED]
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
name (string) -- [REQUIRED]
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) -- [REQUIRED]
The message for the prompt.
templateType (string) -- [REQUIRED]
The type of prompt template.
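For chat templates, toolConfiguration pairs a list of tool specifications with a toolChoice. The sketch below forces the model to request one hypothetical tool, "get_weather", whose input schema is ordinary JSON Schema; the tool name and schema are illustrative assumptions, not part of the API.

```python
# toolConfiguration for a CHAT template: one hypothetical tool,
# with toolChoice forcing the model to request it.
tool_configuration = {
    "tools": [
        {
            "toolSpec": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    }
                },
            }
        }
    ],
    # Alternatives: {"auto": {}} lets the model decide; {"any": {}}
    # requires it to pick at least one of the tools.
    "toolChoice": {"tool": {"name": "get_weather"}},
}
```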
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the prompt from Prompt management.
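Putting the prompt node fields together, a node with an inline CHAT template and an optional guardrail might look like this sketch. The node name, input/output names, guardrail identifier, and variable names are placeholders; the model ID is only an example.

```python
# A Prompt node defined inline with a CHAT template and a guardrail.
prompt_node = {
    "name": "MyPrompt",
    "type": "Prompt",
    "inputs": [{"name": "topic", "type": "String", "expression": "$.data"}],
    "outputs": [{"name": "modelCompletion", "type": "String"}],
    "configuration": {
        "prompt": {
            "guardrailConfiguration": {   # optional
                "guardrailIdentifier": "gr-EXAMPLE",
                "guardrailVersion": "1",
            },
            "sourceConfiguration": {
                "inline": {
                    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
                    "templateType": "CHAT",
                    "templateConfiguration": {
                        "chat": {
                            "inputVariables": [{"name": "topic"}],
                            "system": [{"text": "Answer concisely."}],
                            "messages": [
                                {"role": "user",
                                 "content": [{"text": "Summarize {{topic}}."}]},
                            ],
                        }
                    },
                    "inferenceConfiguration": {
                        "text": {"maxTokens": 512, "temperature": 0.2}
                    },
                }
            },
        }
    },
}
```

To use a prompt from Prompt management instead, replace the inline key with a resource key containing the prompt's ARN.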
retrieval (dict) --
Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket from which to retrieve data.
storage (dict) --
Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket in which to store the input into the node.
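Retrieval and Storage nodes share the same S3 service configuration shape, as the sketch below shows. The bucket name and the input/output names are illustrative placeholders.

```python
# A Retrieval node: fetches an object from S3 and emits it as output.
retrieval_node = {
    "name": "FetchDoc",
    "type": "Retrieval",
    "inputs": [{"name": "retrievalPath", "type": "String",
                "expression": "$.data"}],
    "outputs": [{"name": "retrievedObject", "type": "String"}],
    "configuration": {
        "retrieval": {
            "serviceConfiguration": {"s3": {"bucketName": "my-flow-bucket"}}
        }
    },
}

# A Storage node: writes its input to the same S3 bucket.
storage_node = {
    "name": "SaveResult",
    "type": "Storage",
    "inputs": [{"name": "content", "type": "String", "expression": "$.data"}],
    "configuration": {
        "storage": {
            "serviceConfiguration": {"s3": {"bucketName": "my-flow-bucket"}}
        }
    },
}
```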
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input to a node.
expression (string) -- [REQUIRED]
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
name (string) -- [REQUIRED]
A name for the input that you can reference.
type (string) -- [REQUIRED]
The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
name (string) -- [REQUIRED]
A name for the node.
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) -- [REQUIRED]
A name for the output that you can reference.
type (string) -- [REQUIRED]
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
type (string) -- [REQUIRED]
The type of node. This value must match the name of the key that you provide in the FlowNodeConfiguration field.
string
A description for the flow.
string
[REQUIRED]
The Amazon Resource Name (ARN) of the service role with permissions to create and manage a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
string
[REQUIRED]
The unique identifier of the flow.
string
[REQUIRED]
A name for the flow.
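Since update_flow replaces the flow wholesale, a small builder that assembles the required fields and includes the optional ones only when provided keeps call sites honest. This is a sketch; the identifiers and ARN in the commented usage are placeholders.

```python
def build_update_flow_kwargs(flow_identifier, name, execution_role_arn,
                             definition, description=None,
                             customer_encryption_key_arn=None):
    """Assemble update_flow arguments. Pass every field you want to
    keep, not just the changes."""
    kwargs = {
        "flowIdentifier": flow_identifier,
        "name": name,
        "executionRoleArn": execution_role_arn,
        "definition": definition,
    }
    if description is not None:
        kwargs["description"] = description
    if customer_encryption_key_arn is not None:
        kwargs["customerEncryptionKeyArn"] = customer_encryption_key_arn
    return kwargs

# client = boto3.client("bedrock-agent")
# client.update_flow(**build_update_flow_kwargs(
#     "FLOW12345", "my-flow",
#     "arn:aws:iam::123456789012:role/FlowRole",
#     {"nodes": [], "connections": []}))
```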
dict
Response Syntax
{ 'arn': 'string', 'createdAt': datetime(2015, 1, 1), 'customerEncryptionKeyArn': 'string', 'definition': { 'connections': [ { 'configuration': { 'conditional': { 'condition': 'string' }, 'data': { 'sourceOutput': 'string', 'targetInput': 'string' } }, 'name': 'string', 'source': 'string', 'target': 'string', 'type': 'Data'|'Conditional' }, ], 'nodes': [ { 'configuration': { 'agent': { 'agentAliasArn': 'string' }, 'collector': {}, 'condition': { 'conditions': [ { 'expression': 'string', 'name': 'string' }, ] }, 'input': {}, 'iterator': {}, 'knowledgeBase': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'knowledgeBaseId': 'string', 'modelId': 'string' }, 'lambdaFunction': { 'lambdaArn': 'string' }, 'lex': { 'botAliasArn': 'string', 'localeId': 'string' }, 'output': {}, 'prompt': { 'guardrailConfiguration': { 'guardrailIdentifier': 'string', 'guardrailVersion': 'string' }, 'sourceConfiguration': { 'inline': { 'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None, 'inferenceConfiguration': { 'text': { 'maxTokens': 123, 'stopSequences': [ 'string', ], 'temperature': ..., 'topP': ... } }, 'modelId': 'string', 'templateConfiguration': { 'chat': { 'inputVariables': [ { 'name': 'string' }, ], 'messages': [ { 'content': [ { 'text': 'string' }, ], 'role': 'user'|'assistant' }, ], 'system': [ { 'text': 'string' }, ], 'toolConfiguration': { 'toolChoice': { 'any': {}, 'auto': {}, 'tool': { 'name': 'string' } }, 'tools': [ { 'toolSpec': { 'description': 'string', 'inputSchema': { 'json': {...}|[...]|123|123.4|'string'|True|None }, 'name': 'string' } }, ] } }, 'text': { 'inputVariables': [ { 'name': 'string' }, ], 'text': 'string' } }, 'templateType': 'TEXT'|'CHAT' }, 'resource': { 'promptArn': 'string' } } }, 'retrieval': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } }, 'storage': { 'serviceConfiguration': { 's3': { 'bucketName': 'string' } } } }, 'inputs': [ { 'expression': 'string', 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'name': 'string', 'outputs': [ { 'name': 'string', 'type': 'String'|'Number'|'Boolean'|'Object'|'Array' }, ], 'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector' }, ] }, 'description': 'string', 'executionRoleArn': 'string', 'id': 'string', 'name': 'string', 'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared', 'updatedAt': datetime(2015, 1, 1), 'version': 'string' }
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the flow.
createdAt (datetime) --
The time at which the flow was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that the flow was encrypted with.
definition (dict) --
A definition of the nodes and the connections between nodes in the flow.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
configuration (dict) --
The configuration of the connection.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) --
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) --
The name of the output in the source node that the connection begins from.
targetInput (string) --
The name of the input in the target node that the connection ends at.
name (string) --
A name for the connection that you can reference.
source (string) --
The node that the connection starts at.
target (string) --
The node that the connection ends at.
type (string) --
Whether the source node that the connection begins from is a condition node ( Conditional) or not ( Data).
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
configuration (dict) --
Contains configurations for the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) --
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
condition (dict) --
Contains configurations for a Condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) --
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
name (string) --
A name for the condition that you can reference.
input (dict) --
Contains configurations for an input flow node in your flow. This is the first node in the flow. You can't specify inputs for this node.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
knowledgeBaseId (string) --
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) --
The Amazon Resource Name (ARN) of the Lambda function to invoke.
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) --
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) --
The locale of the Amazon Lex bot to invoke (for example, en_US).
output (dict) --
Contains configurations for an output flow node in your flow. This is the last node in the flow. You can't specify outputs for this node.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
sourceConfiguration (dict) --
Specifies whether the prompt is from Prompt management or defined inline.
inline (dict) --
Contains configurations for a prompt that is defined inline.
additionalModelRequestFields (:ref:`document<document>`) --
Additional fields to be included in the model request for the Prompt node.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
modelId (string) --
The unique identifier of the model or inference profile to run inference with.
templateConfiguration (dict) --
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) --
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
name (string) --
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) --
The Amazon Resource Name (ARN) of the prompt from Prompt management.
retrieval (dict) --
Contains configurations for a Retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) --
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) --
The name of the Amazon S3 bucket from which to retrieve data.
storage (dict) --
Contains configurations for a Storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) --
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) --
The name of the Amazon S3 bucket in which to store the input into the node.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input to a node.
expression (string) --
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
name (string) --
A name for the input that you can reference.
type (string) --
The data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
name (string) --
A name for the node.
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) --
A name for the output that you can reference.
type (string) --
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
type (string) --
The type of node. This value must match the name of the key that you provide in the configuration you provide in the FlowNodeConfiguration field.
description (string) --
The description of the flow.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
id (string) --
The unique identifier of the flow.
name (string) --
The name of the flow.
status (string) --
The status of the flow. When you submit this request, the status will be NotPrepared. If updating fails, the status becomes Failed.
updatedAt (datetime) --
The time at which the flow was last updated.
version (string) --
The version of the flow. When you update a flow, the version updated is the DRAFT version.
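The definition shape above can be exercised with a minimal two-node flow. The sketch below builds an Input node wired directly to an Output node; the node names, the `$.data` expression, and the client/response handling are illustrative assumptions, not values from this page.

```python
# Sketch: a minimal flow definition for validate_flow_definition.
# Node names and the "$.data" expression are placeholder assumptions.
flow_definition = {
    "nodes": [
        {
            "name": "FlowInput",
            "type": "Input",
            "configuration": {"input": {}},
            "outputs": [{"name": "document", "type": "String"}],
        },
        {
            "name": "FlowOutput",
            "type": "Output",
            "configuration": {"output": {}},
            "inputs": [
                {"name": "document", "type": "String", "expression": "$.data"}
            ],
        },
    ],
    "connections": [
        {
            "name": "InputToOutput",
            "type": "Data",
            "source": "FlowInput",
            "target": "FlowOutput",
            "configuration": {
                "data": {"sourceOutput": "document", "targetInput": "document"}
            },
        }
    ],
}

# With AWS credentials configured, the definition could then be validated
# (not executed here; the response handling below is an assumption):
# import boto3
# client = boto3.client("bedrock-agent")
# response = client.validate_flow_definition(definition=flow_definition)
```

Because validation operates on the definition alone, no flow resource needs to exist before calling this API.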
{'variants': {'genAiResource': {'agent': {'agentIdentifier': 'string'}}, 'templateConfiguration': {'chat': {'inputVariables': [{'name': 'string'}], 'messages': [{'content': [{'text': 'string'}], 'role': 'user' | 'assistant'}], 'system': [{'text': 'string'}], 'toolConfiguration': {'toolChoice': {'any': {}, 'auto': {}, 'tool': {'name': 'string'}}, 'tools': [{'toolSpec': {'description': 'string', 'inputSchema': {'json': {}}, 'name': 'string'}}]}}}, 'templateType': {'CHAT'}}}
Modifies a prompt in your prompt library. Include both fields that you want to keep and fields that you want to replace. For more information, see Prompt management in Amazon Bedrock and Edit prompts in your prompt library in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.update_prompt(
    customerEncryptionKeyArn='string',
    defaultVariant='string',
    description='string',
    name='string',
    promptIdentifier='string',
    variants=[
        {
            'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
            'genAiResource': {'agent': {'agentIdentifier': 'string'}},
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': ['string'],
                    'temperature': ...,
                    'topP': ...
                }
            },
            'metadata': [{'key': 'string', 'value': 'string'}],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'chat': {
                    'inputVariables': [{'name': 'string'}],
                    'messages': [
                        {
                            'content': [{'text': 'string'}],
                            'role': 'user'|'assistant'
                        }
                    ],
                    'system': [{'text': 'string'}],
                    'toolConfiguration': {
                        'toolChoice': {
                            'any': {},
                            'auto': {},
                            'tool': {'name': 'string'}
                        },
                        'tools': [
                            {
                                'toolSpec': {
                                    'description': 'string',
                                    'inputSchema': {'json': {...}|[...]|123|123.4|'string'|True|None},
                                    'name': 'string'
                                }
                            }
                        ]
                    }
                },
                'text': {
                    'inputVariables': [{'name': 'string'}],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'|'CHAT'
        }
    ]
)
string
The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.
string
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
string
A description for the prompt.
string
[REQUIRED]
A name for the prompt.
string
[REQUIRED]
The unique identifier of the prompt.
list
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
additionalModelRequestFields (document) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) -- [REQUIRED]
The ARN of the agent with which to use the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) -- [REQUIRED]
The key of a metadata tag for a prompt variant.
value (string) -- [REQUIRED]
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
name (string) -- [REQUIRED]
The name of the prompt variant.
templateConfiguration (dict) -- [REQUIRED]
Contains configurations for the prompt template.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) -- [REQUIRED]
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) -- [REQUIRED]
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) -- [REQUIRED]
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) -- [REQUIRED]
The name of the tool.
tools (list) -- [REQUIRED]
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) -- [REQUIRED]
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
name (string) -- [REQUIRED]
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) -- [REQUIRED]
The message for the prompt.
templateType (string) -- [REQUIRED]
The type of prompt template to use.
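The parameters above can be combined into a single variant payload using the new CHAT template type. In the sketch below, the variant name, input variable, message text, and model ID are placeholder assumptions chosen for illustration.

```python
# Sketch of a CHAT-template variant for update_prompt. The model ID,
# variable name, and message text are placeholder assumptions.
chat_variant = {
    "name": "chat-variant",
    "templateType": "CHAT",
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    "templateConfiguration": {
        "chat": {
            "inputVariables": [{"name": "topic"}],
            "system": [{"text": "You answer concisely."}],
            "messages": [
                # {{topic}} is replaced at runtime with the input variable.
                {"role": "user", "content": [{"text": "Summarize {{topic}}."}]},
            ],
        }
    },
    "inferenceConfiguration": {
        "text": {"maxTokens": 512, "temperature": 0.2}
    },
}

# With AWS credentials configured (promptIdentifier is a placeholder):
# import boto3
# client = boto3.client("bedrock-agent")
# response = client.update_prompt(
#     promptIdentifier="PROMPT12345",
#     name="my-prompt",
#     defaultVariant="chat-variant",
#     variants=[chat_variant],
# )
```

Note that update_prompt replaces the variant list, so any variant you want to keep must be included in the call alongside the one you are changing.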
dict
Response Syntax
{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'defaultVariant': 'string',
    'description': 'string',
    'id': 'string',
    'name': 'string',
    'updatedAt': datetime(2015, 1, 1),
    'variants': [
        {
            'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
            'genAiResource': {'agent': {'agentIdentifier': 'string'}},
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': ['string'],
                    'temperature': ...,
                    'topP': ...
                }
            },
            'metadata': [{'key': 'string', 'value': 'string'}],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'chat': {
                    'inputVariables': [{'name': 'string'}],
                    'messages': [
                        {
                            'content': [{'text': 'string'}],
                            'role': 'user'|'assistant'
                        }
                    ],
                    'system': [{'text': 'string'}],
                    'toolConfiguration': {
                        'toolChoice': {
                            'any': {},
                            'auto': {},
                            'tool': {'name': 'string'}
                        },
                        'tools': [
                            {
                                'toolSpec': {
                                    'description': 'string',
                                    'inputSchema': {'json': {...}|[...]|123|123.4|'string'|True|None},
                                    'name': 'string'
                                }
                            }
                        ]
                    }
                },
                'text': {
                    'inputVariables': [{'name': 'string'}],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'|'CHAT'
        }
    ],
    'version': 'string'
}
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the prompt.
createdAt (datetime) --
The time at which the prompt was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.
defaultVariant (string) --
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
description (string) --
The description of the prompt.
id (string) --
The unique identifier of the prompt.
name (string) --
The name of the prompt.
updatedAt (datetime) --
The time at which the prompt was last updated.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
additionalModelRequestFields (document) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) --
The ARN of the agent with which to use the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
name (string) --
The name of the prompt variant.
templateConfiguration (dict) --
Contains configurations for the prompt template.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
role (string) --
The role that the message belongs to.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
toolChoice (dict) --
Defines which tools the model should request when invoked.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
name (string) --
The name of the tool.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template to use.
version (string) --
The version of the prompt. When you update a prompt, the version updated is the DRAFT version.
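As the version field above notes, update_prompt always writes to the DRAFT version. The sketch below mimics the Response Syntax with a hand-built dict (in real use it would come from client.update_prompt); the IDs and variant names are placeholder assumptions.

```python
# Sketch: inspecting an update_prompt-style response. This dict mirrors
# the Response Syntax above; values are placeholder assumptions.
response = {
    "id": "PROMPT12345",
    "version": "DRAFT",
    "defaultVariant": "chat-variant",
    "variants": [
        {"name": "chat-variant", "templateType": "CHAT"},
        {"name": "text-variant", "templateType": "TEXT"},
    ],
}

# Updates land on the DRAFT version; published versions are unchanged.
names = [v["name"] for v in response["variants"]]
print(response["version"], names)  # → DRAFT ['chat-variant', 'text-variant']
```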