2026/04/03 - Agents for Amazon Bedrock - 10 updated API methods
Changes: Amazon Bedrock Guardrails enforcement configuration APIs now support selective guarding controls for system prompts as well as user and assistant messages, along with SDK support for Amazon Bedrock resource policy APIs.
{'definition': {'nodes': {'configuration': {'prompt': {'sourceConfiguration': {'inline': {'templateConfiguration': {'chat': {'toolConfiguration': {'tools': {'toolSpec': {'strict': 'boolean'}}}}}}}}}}}}
Creates a prompt flow that you can use to send an input through various steps to yield an output. Configure nodes, each of which corresponds to a step of the flow, and create connections between the nodes to create paths to different outputs. For more information, see How it works and Create a flow in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.create_flow(
name='string',
description='string',
executionRoleArn='string',
customerEncryptionKeyArn='string',
definition={
'nodes': [
{
'name': 'string',
'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'|'InlineCode'|'Loop'|'LoopInput'|'LoopController',
'configuration': {
'input': {}
,
'output': {}
,
'knowledgeBase': {
'knowledgeBaseId': 'string',
'modelId': 'string',
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
},
'numberOfResults': 123,
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'rerankingConfiguration': {
'type': 'BEDROCK_RERANKING_MODEL',
'bedrockRerankingConfiguration': {
'modelConfiguration': {
'modelArn': 'string',
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
}
},
'numberOfRerankedResults': 123,
'metadataConfiguration': {
'selectionMode': 'SELECTIVE'|'ALL',
'selectiveModeConfiguration': {
'fieldsToInclude': [
{
'fieldName': 'string'
},
],
'fieldsToExclude': [
{
'fieldName': 'string'
},
]
}
}
}
},
'orchestrationConfiguration': {
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfig': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
},
'performanceConfig': {
'latency': 'standard'|'optimized'
}
}
},
'condition': {
'conditions': [
{
'name': 'string',
'expression': 'string'
},
]
},
'lex': {
'botAliasArn': 'string',
'localeId': 'string'
},
'prompt': {
'sourceConfiguration': {
'resource': {
'promptArn': 'string'
},
'inline': {
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {}
,
'any': {}
,
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None
}
},
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
}
},
'lambdaFunction': {
'lambdaArn': 'string'
},
'storage': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'agent': {
'agentAliasArn': 'string'
},
'retrieval': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'iterator': {}
,
'collector': {}
,
'inlineCode': {
'code': 'string',
'language': 'Python_3'
},
'loop': {
'definition': {'... recursive ...'}
},
'loopInput': {}
,
'loopController': {
'continueCondition': {
'name': 'string',
'expression': 'string'
},
'maxIterations': 123
}
},
'inputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array',
'expression': 'string',
'category': 'LoopCondition'|'ReturnValueToLoopStart'|'ExitLoop'
},
],
'outputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
]
},
],
'connections': [
{
'type': 'Data'|'Conditional',
'name': 'string',
'source': 'string',
'target': 'string',
'configuration': {
'data': {
'sourceOutput': 'string',
'targetInput': 'string'
},
'conditional': {
'condition': 'string'
}
}
},
]
},
clientToken='string',
tags={
'string': 'string'
}
)
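As a minimal sketch of the request shape above, the definition below wires an Input node through a Prompt node to an Output node. The role ARN, model ID, and every node and connection name are illustrative placeholders, not values from the service.

```python
# Sketch only: a minimal Input -> Prompt -> Output flow definition.
# All names, the model ID, and the role ARN are illustrative.

def build_minimal_flow_definition() -> dict:
    """Return a FlowDefinition dict with three nodes and two connections."""
    nodes = [
        {
            "name": "FlowInput",
            "type": "Input",
            "configuration": {"input": {}},
            "outputs": [{"name": "document", "type": "String"}],
        },
        {
            "name": "Summarize",
            "type": "Prompt",
            "configuration": {
                "prompt": {
                    "sourceConfiguration": {
                        "inline": {
                            "templateType": "TEXT",
                            "templateConfiguration": {
                                "text": {
                                    "text": "Summarize this text: {{document}}",
                                    "inputVariables": [{"name": "document"}],
                                }
                            },
                            # Placeholder model ID.
                            "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
                        }
                    }
                }
            },
            "inputs": [
                {"name": "document", "type": "String", "expression": "$.data"}
            ],
            "outputs": [{"name": "modelCompletion", "type": "String"}],
        },
        {
            "name": "FlowOutput",
            "type": "Output",
            "configuration": {"output": {}},
            "inputs": [
                {"name": "document", "type": "String", "expression": "$.data"}
            ],
        },
    ]
    connections = [
        {
            "type": "Data",
            "name": "InputToPrompt",
            "source": "FlowInput",
            "target": "Summarize",
            "configuration": {
                "data": {"sourceOutput": "document", "targetInput": "document"}
            },
        },
        {
            "type": "Data",
            "name": "PromptToOutput",
            "source": "Summarize",
            "target": "FlowOutput",
            "configuration": {
                "data": {"sourceOutput": "modelCompletion",
                         "targetInput": "document"}
            },
        },
    ]
    return {"nodes": nodes, "connections": connections}

# The actual call requires AWS credentials and a flows service role:
# client = boto3.client("bedrock-agent")
# response = client.create_flow(
#     name="summarizer-flow",
#     executionRoleArn="arn:aws:iam::111122223333:role/FlowsServiceRole",
#     definition=build_minimal_flow_definition(),
# )
```

Note that the node names referenced by each connection's source and target must exactly match the names in nodes, and each node's type must match the key used inside its configuration.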
string
[REQUIRED]
A name for the flow.
string
A description for the flow.
string
[REQUIRED]
The Amazon Resource Name (ARN) of the service role with permissions to create and manage a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
string
The Amazon Resource Name (ARN) of the KMS key to encrypt the flow.
dict
A definition of the nodes and connections between nodes in the flow.
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
name (string) -- [REQUIRED]
A name for the node.
type (string) -- [REQUIRED]
The type of node. This value must match the name of the key that you provide in the FlowNodeConfiguration field.
configuration (dict) --
Contains configurations for the node.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
knowledgeBaseId (string) -- [REQUIRED]
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
numberOfResults (integer) --
The number of results to retrieve from the knowledge base.
promptTemplate (dict) --
A custom prompt template to use with the knowledge base for generating responses.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
rerankingConfiguration (dict) --
The configuration for reranking the retrieved results from the knowledge base to improve relevance.
type (string) -- [REQUIRED]
Specifies the type of reranking model to use. Currently, the only supported value is BEDROCK_RERANKING_MODEL.
bedrockRerankingConfiguration (dict) --
Specifies the configuration for using an Amazon Bedrock reranker model to rerank retrieved results.
modelConfiguration (dict) -- [REQUIRED]
Specifies the configuration for the Amazon Bedrock reranker model.
modelArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon Bedrock reranker model.
additionalModelRequestFields (dict) --
Specifies additional model-specific request parameters as key-value pairs that are included in the request to the Amazon Bedrock reranker model.
(string) --
(document) --
numberOfRerankedResults (integer) --
Specifies the number of results to return after reranking.
metadataConfiguration (dict) --
Specifies how metadata fields should be handled during the reranking process.
selectionMode (string) -- [REQUIRED]
The mode for selecting metadata fields for reranking.
selectiveModeConfiguration (dict) --
The configuration for selective metadata field inclusion or exclusion during reranking.
fieldsToInclude (list) --
Specifies the metadata fields to include in the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) -- [REQUIRED]
The name of the metadata field to include or exclude during reranking.
fieldsToExclude (list) --
Specifies the metadata fields to exclude from the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) -- [REQUIRED]
The name of the metadata field to include or exclude during reranking.
orchestrationConfiguration (dict) --
The configuration for orchestrating the retrieval and generation process in the knowledge base node.
promptTemplate (dict) --
A custom prompt template for orchestrating the retrieval and generation process.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfig (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (dict) --
The additional model-specific request parameters as key-value pairs to be included in the request to the foundation model.
(string) --
(document) --
performanceConfig (dict) --
The performance configuration options for the knowledge base retrieval and generation process.
latency (string) --
The latency optimization setting.
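To make the knowledge base node fields above concrete, here is a sketch of one such node with guardrail-free generation plus reranking. The knowledge base ID, model ID, and reranker model ARN are placeholders, and the numeric settings are one possible configuration, not service defaults.

```python
# KnowledgeBase node sketch; resource identifiers are placeholders.

def knowledge_base_node(kb_id: str, model_id: str) -> dict:
    return {
        "name": "AnswerFromKB",
        "type": "KnowledgeBase",
        "configuration": {
            "knowledgeBase": {
                "knowledgeBaseId": kb_id,
                "modelId": model_id,  # omit modelId to return raw retrieved results
                "numberOfResults": 5,
                "inferenceConfiguration": {
                    "text": {"temperature": 0.2, "topP": 0.9, "maxTokens": 512}
                },
                "rerankingConfiguration": {
                    "type": "BEDROCK_RERANKING_MODEL",  # only supported value
                    "bedrockRerankingConfiguration": {
                        "modelConfiguration": {
                            # Placeholder reranker model ARN.
                            "modelArn": ("arn:aws:bedrock:us-east-1::"
                                         "foundation-model/example-reranker-v1"),
                        },
                        "numberOfRerankedResults": 3,
                    },
                },
            }
        },
        "inputs": [
            {"name": "retrievalQuery", "type": "String", "expression": "$.data"}
        ],
        "outputs": [{"name": "outputText", "type": "String"}],
    }
```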
condition (dict) --
Contains configurations for a condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) -- [REQUIRED]
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
name (string) -- [REQUIRED]
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
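A condition node sketch follows; the expression syntax is described in the Node types documentation, and the node, condition, and input names here are illustrative.

```python
# Condition node sketch with one expression branch and a fallback.

def condition_node() -> dict:
    return {
        "name": "RouteByScore",
        "type": "Condition",
        "configuration": {
            "condition": {
                "conditions": [
                    # Taken when the "score" input is at least 7.
                    {"name": "highScore", "expression": "score >= 7"},
                    # Fallback branch: a condition with no expression.
                    {"name": "default"},
                ]
            }
        },
        "inputs": [
            {"name": "score", "type": "Number", "expression": "$.data"}
        ],
    }
```

Each condition name ("highScore", "default") is what a downstream Conditional connection refers to in its configuration.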
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) -- [REQUIRED]
The locale of the Amazon Lex bot to invoke (for example, en_US).
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
sourceConfiguration (dict) -- [REQUIRED]
Specifies whether the prompt is from Prompt management or defined inline.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the prompt from Prompt management.
inline (dict) --
Contains configurations for a prompt that is defined inline.
templateType (string) -- [REQUIRED]
The type of prompt template.
templateConfiguration (dict) -- [REQUIRED]
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) -- [REQUIRED]
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) -- [REQUIRED]
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) -- [REQUIRED]
The role that the message belongs to.
content (list) -- [REQUIRED]
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) -- [REQUIRED]
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) -- [REQUIRED]
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) -- [REQUIRED]
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input
cachePoint (dict) --
Creates a cache checkpoint within a tool designation
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) -- [REQUIRED]
The name of the tool.
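A toolConfiguration sketch for a CHAT template follows. The tool name and schema are illustrative; `strict` is the newly documented flag in this changelog entry, which enforces strict JSON schema adherence for the tool input.

```python
# toolConfiguration sketch for an inline CHAT prompt template.
# The tool name and input schema are illustrative placeholders.

def chat_tool_configuration() -> dict:
    return {
        "tools": [
            {
                "toolSpec": {
                    "name": "get_weather",
                    "description": "Look up the current weather for a city.",
                    "inputSchema": {
                        "json": {
                            "type": "object",
                            "properties": {"city": {"type": "string"}},
                            "required": ["city"],
                        }
                    },
                    # Newly added flag: reject tool inputs that do not
                    # match the schema exactly.
                    "strict": True,
                },
                "cachePoint": {"type": "default"},
            }
        ],
        # Exactly one of auto/any/tool; "auto" lets the model decide
        # whether to call a tool or generate text instead.
        "toolChoice": {"auto": {}},
    }
```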
modelId (string) -- [REQUIRED]
The unique identifier of the model or inference profile to run inference with.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (document) --
Additional fields to be included in the model request for the Prompt node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Lambda function to invoke.
storage (dict) --
Contains configurations for a storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket in which to store the input into the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
retrieval (dict) --
Contains configurations for a retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket from which to retrieve data.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
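The iterator/collector pairing described above can be sketched as two node definitions. The node, input, and output names here are illustrative; check the Node types documentation for the exact names a flow expects.

```python
# Iterator/collector pairing sketch: the iterator fans an array out item
# by item, and a downstream collector gathers the per-item results back
# into a single array. Names are illustrative.

def fan_out_fan_in_nodes() -> list:
    iterator = {
        "name": "FanOut",
        "type": "Iterator",
        "configuration": {"iterator": {}},
        "inputs": [{"name": "array", "type": "Array", "expression": "$.data"}],
        # Emits each item plus the size of the array.
        "outputs": [
            {"name": "arrayItem", "type": "String"},
            {"name": "arraySize", "type": "Number"},
        ],
    }
    collector = {
        "name": "FanIn",
        "type": "Collector",
        "configuration": {"collector": {}},
        "inputs": [
            {"name": "arrayItem", "type": "String", "expression": "$.data"},
            {"name": "arraySize", "type": "Number", "expression": "$.data"},
        ],
        "outputs": [{"name": "collectedArray", "type": "Array"}],
    }
    return [iterator, collector]
```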
inlineCode (dict) --
Contains configurations for an inline code node in your flow. Inline code nodes let you write and execute code directly within your flow, enabling data transformations, custom logic, and integrations without needing an external Lambda function.
code (string) -- [REQUIRED]
The code that's executed in your inline code node. The code can access input data from previous nodes in the flow, perform operations on that data, and produce output that can be used by other nodes in your flow.
The code must be valid in the programming language that you specify.
language (string) -- [REQUIRED]
The programming language used by your inline code node.
The code must be valid in the programming language that you specify. Currently, only Python 3 (Python_3) is supported.
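An inline code node can be sketched as below. How the node's code receives its inputs and returns its output is defined by the flows runtime and not restated here, so treat the code string as a placeholder.

```python
# InlineCode node sketch; the code string, node name, and input/output
# names are illustrative placeholders.

def inline_code_node(transform_code: str) -> dict:
    return {
        "name": "Transform",
        "type": "InlineCode",
        "configuration": {
            "inlineCode": {
                "code": transform_code,
                "language": "Python_3",  # the only supported language
            }
        },
        "inputs": [{"name": "text", "type": "String", "expression": "$.data"}],
        "outputs": [{"name": "transformed", "type": "String"}],
    }
```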
loop (dict) --
Contains configurations for a DoWhile loop in your flow.
definition (dict) --
The definition of the DoWhile loop nodes and connections between nodes in the flow.
loopInput (dict) --
Contains input node configurations for a DoWhile loop in your flow.
loopController (dict) --
Contains controller node configurations for a DoWhile loop in your flow.
continueCondition (dict) -- [REQUIRED]
Specifies the condition that determines when the flow exits the DoWhile loop. The loop executes until this condition evaluates to true.
name (string) -- [REQUIRED]
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
maxIterations (integer) --
Specifies the maximum number of times the DoWhile loop can iterate before the flow exits the loop.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input in an Amazon Bedrock Flows node.
name (string) -- [REQUIRED]
Specifies a name for the input that you can reference.
type (string) -- [REQUIRED]
Specifies the data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
expression (string) -- [REQUIRED]
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
category (string) --
Specifies how input data flows between iterations in a DoWhile loop.
LoopCondition - Controls whether the loop continues by evaluating condition expressions against the input data. Use this category to define the condition that determines if the loop should continue.
ReturnValueToLoopStart - Defines data to pass back to the start of the loop's next iteration. Use this category for variables that you want to update for each loop iteration.
ExitLoop - Defines the value that's available once the loop ends. Use this category to expose loop results to nodes outside the loop.
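The three input categories and the loop controller fields above fit together as in this sketch. Per the field docs, the loop runs until the continue condition evaluates to true or maxIterations is reached; the names and expressions are illustrative.

```python
# LoopController sketch with one input per DoWhile category.
# Names and expressions are illustrative placeholders.

def loop_controller_node(max_iterations: int = 10) -> dict:
    return {
        "name": "LoopControl",
        "type": "LoopController",
        "configuration": {
            "loopController": {
                # The loop exits once this evaluates to true.
                "continueCondition": {
                    "name": "stopWhenDone",
                    "expression": "counter >= 5",
                },
                "maxIterations": max_iterations,  # hard cap on iterations
            }
        },
        "inputs": [
            # Evaluated against the continue condition on each pass.
            {"name": "counter", "type": "Number", "expression": "$.data",
             "category": "LoopCondition"},
            # Passed back to the start of the next iteration.
            {"name": "runningTotal", "type": "Number", "expression": "$.data",
             "category": "ReturnValueToLoopStart"},
            # Exposed to nodes outside the loop when the loop ends.
            {"name": "finalTotal", "type": "Number", "expression": "$.data",
             "category": "ExitLoop"},
        ],
    }
```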
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) -- [REQUIRED]
A name for the output that you can reference.
type (string) -- [REQUIRED]
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
type (string) -- [REQUIRED]
Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).
name (string) -- [REQUIRED]
A name for the connection that you can reference.
source (string) -- [REQUIRED]
The node that the connection starts at.
target (string) -- [REQUIRED]
The node that the connection ends at.
configuration (dict) --
The configuration of the connection.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) -- [REQUIRED]
The name of the output in the source node that the connection begins from.
targetInput (string) -- [REQUIRED]
The name of the input in the target node that the connection ends at.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) -- [REQUIRED]
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
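Two small helpers illustrate the connection shapes above; the generated connection names are illustrative. A Data connection maps a source node's output to a target node's input, while a Conditional connection leaves a Condition node via a named condition.

```python
# Connection-builder sketches; all generated names are illustrative.

def data_connection(source: str, output: str, target: str, inp: str) -> dict:
    """Connect a non-Condition node's output to a target node's input."""
    return {
        "type": "Data",
        "name": f"{source}To{target}",
        "source": source,
        "target": target,
        "configuration": {
            "data": {"sourceOutput": output, "targetInput": inp}
        },
    }

def conditional_connection(source: str, condition: str, target: str) -> dict:
    """Connect a Condition node to a target via one of its condition names."""
    # "condition" must match a condition name defined on the source node.
    return {
        "type": "Conditional",
        "name": f"{source}{condition}To{target}",
        "source": source,
        "target": target,
        "configuration": {"conditional": {"condition": condition}},
    }
```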
string
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
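One common pattern, sketched below, is to generate the token yourself so you can reuse it on a retry: passing the same token on a retried request makes Amazon Bedrock treat it as the same request instead of creating a second flow.

```python
import uuid

# A fresh UUID is a reasonable clientToken. Generate it once per logical
# request and reuse it when retrying that request after a failure or
# timeout.
def make_client_token() -> str:
    return str(uuid.uuid4())

# Usage sketch (requires AWS credentials):
# token = make_client_token()
# client.create_flow(name="my-flow", executionRoleArn=role_arn,
#                    definition=definition, clientToken=token)
```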
dict
Any tags that you want to attach to the flow. For more information, see Tagging resources in Amazon Bedrock.
(string) --
(string) --
dict
Response Syntax
{
'name': 'string',
'description': 'string',
'executionRoleArn': 'string',
'customerEncryptionKeyArn': 'string',
'id': 'string',
'arn': 'string',
'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared',
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1),
'version': 'string',
'definition': {
'nodes': [
{
'name': 'string',
'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'|'InlineCode'|'Loop'|'LoopInput'|'LoopController',
'configuration': {
'input': {},
'output': {},
'knowledgeBase': {
'knowledgeBaseId': 'string',
'modelId': 'string',
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
},
'numberOfResults': 123,
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'rerankingConfiguration': {
'type': 'BEDROCK_RERANKING_MODEL',
'bedrockRerankingConfiguration': {
'modelConfiguration': {
'modelArn': 'string',
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
}
},
'numberOfRerankedResults': 123,
'metadataConfiguration': {
'selectionMode': 'SELECTIVE'|'ALL',
'selectiveModeConfiguration': {
'fieldsToInclude': [
{
'fieldName': 'string'
},
],
'fieldsToExclude': [
{
'fieldName': 'string'
},
]
}
}
}
},
'orchestrationConfiguration': {
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfig': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
},
'performanceConfig': {
'latency': 'standard'|'optimized'
}
}
},
'condition': {
'conditions': [
{
'name': 'string',
'expression': 'string'
},
]
},
'lex': {
'botAliasArn': 'string',
'localeId': 'string'
},
'prompt': {
'sourceConfiguration': {
'resource': {
'promptArn': 'string'
},
'inline': {
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {},
'any': {},
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None
}
},
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
}
},
'lambdaFunction': {
'lambdaArn': 'string'
},
'storage': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'agent': {
'agentAliasArn': 'string'
},
'retrieval': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'iterator': {},
'collector': {},
'inlineCode': {
'code': 'string',
'language': 'Python_3'
},
'loop': {
'definition': {'... recursive ...'}
},
'loopInput': {},
'loopController': {
'continueCondition': {
'name': 'string',
'expression': 'string'
},
'maxIterations': 123
}
},
'inputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array',
'expression': 'string',
'category': 'LoopCondition'|'ReturnValueToLoopStart'|'ExitLoop'
},
],
'outputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
]
},
],
'connections': [
{
'type': 'Data'|'Conditional',
'name': 'string',
'source': 'string',
'target': 'string',
'configuration': {
'data': {
'sourceOutput': 'string',
'targetInput': 'string'
},
'conditional': {
'condition': 'string'
}
}
},
]
}
}
Response Structure
(dict) --
name (string) --
The name of the flow.
description (string) --
The description of the flow.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that you encrypted the flow with.
id (string) --
The unique identifier of the flow.
arn (string) --
The Amazon Resource Name (ARN) of the flow.
status (string) --
The status of the flow. When you submit this request, the status will be NotPrepared. If creation fails, the status becomes Failed.
createdAt (datetime) --
The time at which the flow was created.
updatedAt (datetime) --
The time at which the flow was last updated.
version (string) --
The version of the flow. When you create a flow, the version created is the DRAFT version.
definition (dict) --
A definition of the nodes and connections between nodes in the flow.
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
name (string) --
A name for the node.
type (string) --
The type of node. This value must match the name of the key that you provide in the FlowNodeConfiguration field.
configuration (dict) --
Contains configurations for the node.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
knowledgeBaseId (string) --
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
numberOfResults (integer) --
The number of results to retrieve from the knowledge base.
promptTemplate (dict) --
A custom prompt template to use with the knowledge base for generating responses.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
rerankingConfiguration (dict) --
The configuration for reranking the retrieved results from the knowledge base to improve relevance.
type (string) --
Specifies the type of reranking model to use. Currently, the only supported value is BEDROCK_RERANKING_MODEL.
bedrockRerankingConfiguration (dict) --
Specifies the configuration for using an Amazon Bedrock reranker model to rerank retrieved results.
modelConfiguration (dict) --
Specifies the configuration for the Amazon Bedrock reranker model.
modelArn (string) --
The Amazon Resource Name (ARN) of the Amazon Bedrock reranker model.
additionalModelRequestFields (dict) --
Specifies additional model-specific request parameters as key-value pairs that are included in the request to the Amazon Bedrock reranker model.
(string) --
(:ref:`document<document>`) --
numberOfRerankedResults (integer) --
Specifies the number of results to return after reranking.
metadataConfiguration (dict) --
Specifies how metadata fields should be handled during the reranking process.
selectionMode (string) --
The mode for selecting metadata fields for reranking.
selectiveModeConfiguration (dict) --
The configuration for selective metadata field inclusion or exclusion during reranking.
fieldsToInclude (list) --
Specifies the metadata fields to include in the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) --
The name of the metadata field to include or exclude during reranking.
fieldsToExclude (list) --
Specifies the metadata fields to exclude from the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) --
The name of the metadata field to include or exclude during reranking.
orchestrationConfiguration (dict) --
The configuration for orchestrating the retrieval and generation process in the knowledge base node.
promptTemplate (dict) --
A custom prompt template for orchestrating the retrieval and generation process.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfig (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (dict) --
The additional model-specific request parameters as key-value pairs to be included in the request to the foundation model.
(string) --
(:ref:`document<document>`) --
performanceConfig (dict) --
The performance configuration options for the knowledge base retrieval and generation process.
latency (string) --
The latency optimization setting.
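Putting the fields above together, a minimal knowledge base node might look like the following sketch. The knowledge base ID, model ID, and input/output names are placeholders, not values from this document:

```python
# Sketch of a KnowledgeBase node for create_flow. All identifiers below are
# illustrative placeholders; substitute your own knowledge base and model.
kb_node = {
    'name': 'AnswerFromKB',
    'type': 'KnowledgeBase',
    'configuration': {
        'knowledgeBase': {
            'knowledgeBaseId': 'KB123EXAMPLE',
            # Omit modelId to get raw retrieved results instead of a
            # generated response.
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'numberOfResults': 5,
            'inferenceConfiguration': {
                'text': {'temperature': 0.2, 'maxTokens': 512}
            },
        }
    },
    'inputs': [
        {'name': 'retrievalQuery', 'type': 'String', 'expression': '$.data'}
    ],
    'outputs': [
        {'name': 'outputText', 'type': 'String'}
    ],
}
```

Omitting modelId switches the node from generated-response mode to returning the retrieved results as an array, per the modelId description above.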
condition (dict) --
Contains configurations for a condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) --
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
name (string) --
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
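A condition node pairs a name with an expression that references at least one of the node's inputs. The sketch below is illustrative; the input name and expression syntax are placeholders (see the Condition node section in Node types in prompt flows for the exact expression grammar):

```python
# Sketch of a Condition node. "score" is a hypothetical input; the
# expression must reference at least one input by name.
condition_node = {
    'name': 'CheckScore',
    'type': 'Condition',
    'configuration': {
        'condition': {
            'conditions': [
                {'name': 'highScore', 'expression': 'score >= 0.8'},
            ]
        }
    },
    'inputs': [
        {'name': 'score', 'type': 'Number', 'expression': '$.data'}
    ],
}
```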
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) --
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) --
The identifier of the locale in which to invoke the Amazon Lex bot (for example, en_US).
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
sourceConfiguration (dict) --
Specifies whether the prompt is from Prompt management or defined inline.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) --
The Amazon Resource Name (ARN) of the prompt from Prompt management.
inline (dict) --
Contains configurations for a prompt that is defined inline.
templateType (string) --
The type of prompt template.
templateConfiguration (dict) --
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) --
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) --
The role that the message belongs to.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) --
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) --
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation
type (string) --
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
modelId (string) --
The unique identifier of the model or inference profile to run inference with.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (:ref:`document<document>`) --
Additional fields to be included in the model request for the Prompt node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
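The prompt node fields above can be combined into an inline CHAT template that also declares a tool, including the strict flag this update adds to toolSpec. The model ID, tool name, and JSON schema below are illustrative placeholders:

```python
# Sketch of a Prompt node with an inline CHAT template and one tool.
# Model ID, tool name, and schema are placeholders.
prompt_node = {
    'name': 'AskModel',
    'type': 'Prompt',
    'configuration': {
        'prompt': {
            'sourceConfiguration': {
                'inline': {
                    'templateType': 'CHAT',
                    'templateConfiguration': {
                        'chat': {
                            'system': [
                                {'text': 'You are a concise assistant.'}
                            ],
                            'messages': [
                                {
                                    'role': 'user',
                                    'content': [{'text': '{{question}}'}],
                                },
                            ],
                            'inputVariables': [{'name': 'question'}],
                            'toolConfiguration': {
                                'tools': [
                                    {
                                        'toolSpec': {
                                            'name': 'get_weather',
                                            'description': 'Look up current weather.',
                                            'inputSchema': {
                                                'json': {
                                                    'type': 'object',
                                                    'properties': {
                                                        'city': {'type': 'string'}
                                                    },
                                                    'required': ['city'],
                                                }
                                            },
                                            # New field: enforce strict JSON
                                            # schema adherence for tool input.
                                            'strict': True,
                                        }
                                    },
                                ],
                                # Let the model decide whether to call a tool.
                                'toolChoice': {'auto': {}},
                            },
                        }
                    },
                    'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
                    'inferenceConfiguration': {
                        'text': {'temperature': 0.2, 'maxTokens': 512}
                    },
                }
            }
        }
    },
}
```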
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) --
The Amazon Resource Name (ARN) of the Lambda function to invoke.
storage (dict) --
Contains configurations for a storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) --
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) --
The name of the Amazon S3 bucket in which to store the input into the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) --
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
retrieval (dict) --
Contains configurations for a retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) --
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) --
The name of the Amazon S3 bucket from which to retrieve data.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
inlineCode (dict) --
Contains configurations for an inline code node in your flow. Inline code nodes let you write and execute code directly within your flow, enabling data transformations, custom logic, and integrations without needing an external Lambda function.
code (string) --
The code that's executed in your inline code node. The code can access input data from previous nodes in the flow, perform operations on that data, and produce output that can be used by other nodes in your flow.
The code must be valid in the programming language that you specify.
language (string) --
The programming language used by your inline code node.
The code must be valid in the programming language that you specify. Currently, only Python 3 (Python_3) is supported.
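An inline code node, then, embeds a Python_3 source string in the node configuration. The node name and the shape of the embedded function below are assumptions for illustration, not a documented entry-point contract:

```python
# Sketch of an InlineCode node. Only Python_3 is supported; the embedded
# function's name and signature are illustrative assumptions.
inline_code_node = {
    'name': 'NormalizeText',
    'type': 'InlineCode',
    'configuration': {
        'inlineCode': {
            'language': 'Python_3',
            'code': (
                'def function(event):\n'
                '    return event["input"].strip().lower()\n'
            ),
        }
    },
}
```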
loop (dict) --
Contains configurations for a DoWhile loop in your flow.
definition (dict) --
The definition of the DoWhile loop nodes and connections between nodes in the flow.
loopInput (dict) --
Contains input node configurations for a DoWhile loop in your flow.
loopController (dict) --
Contains controller node configurations for a DoWhile loop in your flow.
continueCondition (dict) --
Specifies the condition that determines when the flow exits the DoWhile loop. The loop executes until this condition evaluates to true.
name (string) --
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
maxIterations (integer) --
Specifies the maximum number of times the DoWhile loop can iterate before the flow exits the loop.
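A loop controller node pairs the continue condition with an iteration cap. The sketch below is illustrative; the input name referenced by the expression is a placeholder:

```python
# Sketch of a LoopController node for a DoWhile loop. "iterationCount"
# is a hypothetical input referenced by the condition expression.
loop_controller_node = {
    'name': 'LoopGate',
    'type': 'LoopController',
    'configuration': {
        'loopController': {
            'continueCondition': {
                'name': 'keepGoing',
                'expression': 'iterationCount < 5',
            },
            # Hard cap on iterations regardless of the condition.
            'maxIterations': 10,
        }
    },
}
```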
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input in an Amazon Bedrock Flows node.
name (string) --
Specifies a name for the input that you can reference.
type (string) --
Specifies the data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
expression (string) --
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
category (string) --
Specifies how input data flows between iterations in a DoWhile loop.
LoopCondition - Controls whether the loop continues by evaluating condition expressions against the input data. Use this category to define the condition that determines if the loop should continue.
ReturnValueToLoopStart - Defines data to pass back to the start of the loop's next iteration. Use this category for variables that you want to update for each loop iteration.
ExitLoop - Defines the value that's available once the loop ends. Use this category to expose loop results to nodes outside the loop.
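The three categories above can be sketched as one input each; the names and expressions are placeholders:

```python
# Sketch of loop-related node inputs, one per category. Names and
# expressions are illustrative placeholders.
loop_inputs = [
    # Evaluated to decide whether the loop continues.
    {'name': 'shouldContinue', 'type': 'Boolean',
     'expression': '$.data', 'category': 'LoopCondition'},
    # Carried back to the start of the next iteration.
    {'name': 'runningTotal', 'type': 'Number',
     'expression': '$.data', 'category': 'ReturnValueToLoopStart'},
    # Exposed to nodes outside the loop once it ends.
    {'name': 'finalResult', 'type': 'String',
     'expression': '$.data', 'category': 'ExitLoop'},
]
```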
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) --
A name for the output that you can reference.
type (string) --
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
type (string) --
Whether the source node that the connection begins from is a condition node ( Conditional) or not ( Data).
name (string) --
A name for the connection that you can reference.
source (string) --
The node that the connection starts at.
target (string) --
The node that the connection ends at.
configuration (dict) --
The configuration of the connection.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) --
The name of the output in the source node that the connection begins from.
targetInput (string) --
The name of the input in the target node that the connection ends at.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) --
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
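The two connection shapes described above can be sketched as follows. Node names, output/input names, and the condition name are placeholders:

```python
# Sketch of flow connections: a Data connection between ordinary nodes,
# and a Conditional connection leaving a condition node. All names are
# illustrative placeholders.
connections = [
    {
        'type': 'Data',
        'name': 'InputToPrompt',
        'source': 'FlowInput',
        'target': 'AskModel',
        'configuration': {
            'data': {'sourceOutput': 'document', 'targetInput': 'question'}
        },
    },
    {
        'type': 'Conditional',
        'name': 'HighScoreBranch',
        'source': 'CheckScore',
        'target': 'FlowOutput',
        'configuration': {
            # References a condition defined in the source condition node.
            'conditional': {'condition': 'highScore'}
        },
    },
]
```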
{'definition': {'nodes': {'configuration': {'prompt': {'sourceConfiguration': {'inline': {'templateConfiguration': {'chat': {'toolConfiguration': {'tools': {'toolSpec': {'strict': 'boolean'}}}}}}}}}}}}
Creates a version of the flow that you can deploy. For more information, see Deploy a flow in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.create_flow_version(
flowIdentifier='string',
description='string',
clientToken='string'
)
string
[REQUIRED]
The unique identifier of the flow that you want to create a version of.
string
A description of the version of the flow.
string
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
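A minimal create_flow_version request needs only the flow identifier; the sketch below uses a placeholder identifier and leaves clientToken to be autopopulated:

```python
# Sketch of a create_flow_version request. 'FLOW123EXAMPLE' is a
# placeholder; clientToken is omitted and will be autopopulated.
params = {
    'flowIdentifier': 'FLOW123EXAMPLE',
    'description': 'First stable version',
}

# To run against your account:
# import boto3
# client = boto3.client('bedrock-agent')
# response = client.create_flow_version(**params)
# print(response['version'], response['status'])
```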
dict
Response Syntax
{
'name': 'string',
'description': 'string',
'executionRoleArn': 'string',
'customerEncryptionKeyArn': 'string',
'id': 'string',
'arn': 'string',
'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared',
'createdAt': datetime(2015, 1, 1),
'version': 'string',
'definition': {
'nodes': [
{
'name': 'string',
'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'|'InlineCode'|'Loop'|'LoopInput'|'LoopController',
'configuration': {
'input': {},
'output': {},
'knowledgeBase': {
'knowledgeBaseId': 'string',
'modelId': 'string',
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
},
'numberOfResults': 123,
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'rerankingConfiguration': {
'type': 'BEDROCK_RERANKING_MODEL',
'bedrockRerankingConfiguration': {
'modelConfiguration': {
'modelArn': 'string',
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
}
},
'numberOfRerankedResults': 123,
'metadataConfiguration': {
'selectionMode': 'SELECTIVE'|'ALL',
'selectiveModeConfiguration': {
'fieldsToInclude': [
{
'fieldName': 'string'
},
],
'fieldsToExclude': [
{
'fieldName': 'string'
},
]
}
}
}
},
'orchestrationConfiguration': {
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfig': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
},
'performanceConfig': {
'latency': 'standard'|'optimized'
}
}
},
'condition': {
'conditions': [
{
'name': 'string',
'expression': 'string'
},
]
},
'lex': {
'botAliasArn': 'string',
'localeId': 'string'
},
'prompt': {
'sourceConfiguration': {
'resource': {
'promptArn': 'string'
},
'inline': {
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {},
'any': {},
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None
}
},
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
}
},
'lambdaFunction': {
'lambdaArn': 'string'
},
'storage': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'agent': {
'agentAliasArn': 'string'
},
'retrieval': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'iterator': {},
'collector': {},
'inlineCode': {
'code': 'string',
'language': 'Python_3'
},
'loop': {
'definition': {'... recursive ...'}
},
'loopInput': {},
'loopController': {
'continueCondition': {
'name': 'string',
'expression': 'string'
},
'maxIterations': 123
}
},
'inputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array',
'expression': 'string',
'category': 'LoopCondition'|'ReturnValueToLoopStart'|'ExitLoop'
},
],
'outputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
]
},
],
'connections': [
{
'type': 'Data'|'Conditional',
'name': 'string',
'source': 'string',
'target': 'string',
'configuration': {
'data': {
'sourceOutput': 'string',
'targetInput': 'string'
},
'conditional': {
'condition': 'string'
}
}
},
]
}
}
Response Structure
(dict) --
name (string) --
The name of the version.
description (string) --
The description of the version.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that the flow is encrypted with.
id (string) --
The unique identifier of the flow.
arn (string) --
The Amazon Resource Name (ARN) of the flow.
status (string) --
The status of the flow.
createdAt (datetime) --
The time at which the flow was created.
version (string) --
The version of the flow that was created. Versions are numbered incrementally, starting from 1.
definition (dict) --
A definition of the nodes and connections in the flow.
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
name (string) --
A name for the node.
type (string) --
The type of node. This value must match the name of the key that you provide in the configuration you provide in the FlowNodeConfiguration field.
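As an illustration of that convention, in the syntax shown above the configuration key is the node type with its first letter lowercased (for example, KnowledgeBase pairs with the knowledgeBase key):

```python
# Derive the configuration key for a node type, per the request/response
# syntax above: lowercase the first letter.
def config_key_for(node_type: str) -> str:
    return node_type[0].lower() + node_type[1:]
```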
configuration (dict) --
Contains configurations for the node.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
knowledgeBaseId (string) --
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
numberOfResults (integer) --
The number of results to retrieve from the knowledge base.
promptTemplate (dict) --
A custom prompt template to use with the knowledge base for generating responses.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
rerankingConfiguration (dict) --
The configuration for reranking the retrieved results from the knowledge base to improve relevance.
type (string) --
Specifies the type of reranking model to use. Currently, the only supported value is BEDROCK_RERANKING_MODEL.
bedrockRerankingConfiguration (dict) --
Specifies the configuration for using an Amazon Bedrock reranker model to rerank retrieved results.
modelConfiguration (dict) --
Specifies the configuration for the Amazon Bedrock reranker model.
modelArn (string) --
The Amazon Resource Name (ARN) of the Amazon Bedrock reranker model.
additionalModelRequestFields (dict) --
Specifies additional model-specific request parameters as key-value pairs that are included in the request to the Amazon Bedrock reranker model.
(string) --
(:ref:`document<document>`) --
numberOfRerankedResults (integer) --
Specifies the number of results to return after reranking.
metadataConfiguration (dict) --
Specifies how metadata fields should be handled during the reranking process.
selectionMode (string) --
The mode for selecting metadata fields for reranking.
selectiveModeConfiguration (dict) --
The configuration for selective metadata field inclusion or exclusion during reranking.
fieldsToInclude (list) --
Specifies the metadata fields to include in the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) --
The name of the metadata field to include or exclude during reranking.
fieldsToExclude (list) --
Specifies the metadata fields to exclude from the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) --
The name of the metadata field to include or exclude during reranking.
orchestrationConfiguration (dict) --
The configuration for orchestrating the retrieval and generation process in the knowledge base node.
promptTemplate (dict) --
A custom prompt template for orchestrating the retrieval and generation process.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfig (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (dict) --
The additional model-specific request parameters as key-value pairs to be included in the request to the foundation model.
(string) --
(:ref:`document<document>`) --
performanceConfig (dict) --
The performance configuration options for the knowledge base retrieval and generation process.
latency (string) --
The latency optimization setting.
condition (dict) --
Contains configurations for a condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) --
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
name (string) --
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) --
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) --
The identifier of the locale in which to invoke the Amazon Lex bot (for example, en_US).
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
sourceConfiguration (dict) --
Specifies whether the prompt is from Prompt management or defined inline.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) --
The Amazon Resource Name (ARN) of the prompt from Prompt management.
inline (dict) --
Contains configurations for a prompt that is defined inline.
templateType (string) --
The type of prompt template.
templateConfiguration (dict) --
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) --
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) --
The role that the message belongs to.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) --
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) --
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation
type (string) --
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
modelId (string) --
The unique identifier of the model or inference profile to run inference with.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (:ref:`document<document>`) --
Additional fields to be included in the model request for the Prompt node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) --
The Amazon Resource Name (ARN) of the Lambda function to invoke.
storage (dict) --
Contains configurations for a storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) --
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) --
The name of the Amazon S3 bucket in which to store the input into the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) --
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
retrieval (dict) --
Contains configurations for a retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) --
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) --
The name of the Amazon S3 bucket from which to retrieve data.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
inlineCode (dict) --
Contains configurations for an inline code node in your flow. Inline code nodes let you write and execute code directly within your flow, enabling data transformations, custom logic, and integrations without needing an external Lambda function.
code (string) --
The code that's executed in your inline code node. The code can access input data from previous nodes in the flow, perform operations on that data, and produce output that can be used by other nodes in your flow.
The code must be valid in the programming language that you specify.
language (string) --
The programming language used by your inline code node.
The code must be valid in the programming language that you specify. Currently, only Python 3 (Python_3) is supported.
loop (dict) --
Contains configurations for a DoWhile loop in your flow.
definition (dict) --
The definition of the DoWhile loop nodes and connections between nodes in the flow.
loopInput (dict) --
Contains input node configurations for a DoWhile loop in your flow.
loopController (dict) --
Contains controller node configurations for a DoWhile loop in your flow.
continueCondition (dict) --
Specifies the condition that determines when the flow exits the DoWhile loop. The loop executes until this condition evaluates to true.
name (string) --
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, see the Condition node section in Node types in prompt flows.
maxIterations (integer) --
Specifies the maximum number of times the DoWhile loop can iterate before the flow exits the loop.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input in an Amazon Bedrock Flows node.
name (string) --
Specifies a name for the input that you can reference.
type (string) --
Specifies the data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
expression (string) --
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
category (string) --
Specifies how input data flows between iterations in a DoWhile loop.
LoopCondition - Controls whether the loop continues by evaluating condition expressions against the input data. Use this category to define the condition that determines if the loop should continue.
ReturnValueToLoopStart - Defines data to pass back to the start of the loop's next iteration. Use this category for variables that you want to update for each loop iteration.
ExitLoop - Defines the value that's available once the loop ends. Use this category to expose loop results to nodes outside the loop.
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) --
A name for the output that you can reference.
type (string) --
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
type (string) --
Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).
name (string) --
A name for the connection that you can reference.
source (string) --
The node that the connection starts at.
target (string) --
The node that the connection ends at.
configuration (dict) --
The configuration of the connection.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) --
The name of the output in the source node that the connection begins from.
targetInput (string) --
The name of the input in the target node that the connection ends at.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) --
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
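The node and connection shapes described above can be assembled as plain Python dictionaries before calling create_flow. The following is a minimal sketch, not a definitive implementation: the node names, model ID, role ARN, and expressions are illustrative placeholders. It wires an Input node through a Prompt node to an Output node.

```python
# Minimal flow definition sketch: Input -> Prompt -> Output.
# All names, ARNs, and the model ID are placeholders, not real resources.
flow_definition = {
    "nodes": [
        {"name": "FlowInput", "type": "Input",
         "outputs": [{"name": "document", "type": "String"}]},
        {"name": "Summarize", "type": "Prompt",
         "configuration": {"prompt": {"sourceConfiguration": {"inline": {
             "templateType": "TEXT",
             "templateConfiguration": {"text": {
                 "text": "Summarize the following text: {{document}}",
                 "inputVariables": [{"name": "document"}]}},
             "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
             "inferenceConfiguration": {"text": {"temperature": 0.2,
                                                 "maxTokens": 512}}}}}},
         "inputs": [{"name": "document", "type": "String",
                     "expression": "$.data"}],
         "outputs": [{"name": "modelCompletion", "type": "String"}]},
        {"name": "FlowOutput", "type": "Output",
         "inputs": [{"name": "document", "type": "String",
                     "expression": "$.data"}]},
    ],
    "connections": [
        # Data connections map a source node's output to a target node's input.
        {"name": "InToPrompt", "type": "Data",
         "source": "FlowInput", "target": "Summarize",
         "configuration": {"data": {"sourceOutput": "document",
                                    "targetInput": "document"}}},
        {"name": "PromptToOut", "type": "Data",
         "source": "Summarize", "target": "FlowOutput",
         "configuration": {"data": {"sourceOutput": "modelCompletion",
                                    "targetInput": "document"}}},
    ],
}

# With boto3 installed and AWS credentials configured, the call would be:
# client = boto3.client("bedrock-agent")
# client.create_flow(
#     name="summarizer",
#     executionRoleArn="arn:aws:iam::123456789012:role/FlowRole",  # placeholder
#     definition=flow_definition,
# )
```

The API call itself is left commented out because it requires credentials and real resource ARNs; the dictionary above mirrors the Request Syntax shown earlier in this section.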
{'variants': {'templateConfiguration': {'chat': {'toolConfiguration': {'tools': {'toolSpec': {'strict': 'boolean'}}}}}}}
Creates a prompt in your prompt library that you can add to a flow. For more information, see Prompt management in Amazon Bedrock, Create a prompt using Prompt management and Prompt flows in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.create_prompt(
name='string',
description='string',
customerEncryptionKeyArn='string',
defaultVariant='string',
variants=[
{
'name': 'string',
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {}
,
'any': {}
,
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'metadata': [
{
'key': 'string',
'value': 'string'
},
],
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
'genAiResource': {
'agent': {
'agentIdentifier': 'string'
}
}
},
],
clientToken='string',
tags={
'string': 'string'
}
)
string
[REQUIRED]
A name for the prompt.
string
A description for the prompt.
string
The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.
string
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
list
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
name (string) -- [REQUIRED]
The name of the prompt variant.
templateType (string) -- [REQUIRED]
The type of prompt template to use.
templateConfiguration (dict) -- [REQUIRED]
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) -- [REQUIRED]
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) -- [REQUIRED]
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) -- [REQUIRED]
The role that the message belongs to.
content (list) -- [REQUIRED]
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) -- [REQUIRED]
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) -- [REQUIRED]
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) -- [REQUIRED]
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) -- [REQUIRED]
The name of the tool.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) -- [REQUIRED]
The key of a metadata tag for a prompt variant.
value (string) -- [REQUIRED]
The value of a metadata tag for a prompt variant.
additionalModelRequestFields (:ref:`document<document>`) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) -- [REQUIRED]
The ARN of the agent with which to use the prompt.
string
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
dict
Any tags that you want to attach to the prompt. For more information, see Tagging resources in Amazon Bedrock.
(string) --
(string) --
dict
Response Syntax
{
'name': 'string',
'description': 'string',
'customerEncryptionKeyArn': 'string',
'defaultVariant': 'string',
'variants': [
{
'name': 'string',
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {},
'any': {},
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'metadata': [
{
'key': 'string',
'value': 'string'
},
],
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
'genAiResource': {
'agent': {
'agentIdentifier': 'string'
}
}
},
],
'id': 'string',
'arn': 'string',
'version': 'string',
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1)
}
Response Structure
(dict) --
name (string) --
The name of the prompt.
description (string) --
The description of the prompt.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that you encrypted the prompt with.
defaultVariant (string) --
The name of the default variant for your prompt.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
name (string) --
The name of the prompt variant.
templateType (string) --
The type of prompt template to use.
templateConfiguration (dict) --
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) --
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) --
The role that the message belongs to.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) --
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) --
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation.
type (string) --
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
additionalModelRequestFields (:ref:`document<document>`) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) --
The ARN of the agent with which to use the prompt.
id (string) --
The unique identifier of the prompt.
arn (string) --
The Amazon Resource Name (ARN) of the prompt.
version (string) --
The version of the prompt. When you create a prompt, the version created is the DRAFT version.
createdAt (datetime) --
The time at which the prompt was created.
updatedAt (datetime) --
The time at which the prompt was last updated.
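The headline change in this release is the toolSpec strict flag for CHAT variants. A minimal sketch of building such a variant follows; the tool name, schema, and model ID are illustrative placeholders, and the create_prompt call is shown commented out since it needs credentials.

```python
# A tool specification using the new "strict" flag, which asks the model
# to adhere strictly to the declared JSON input schema for this tool.
# The tool itself (get_weather) is a hypothetical example.
weather_tool = {
    "toolSpec": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "inputSchema": {"json": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        }},
        "strict": True,  # new in this release
    }
}

# A CHAT prompt variant carrying the tool configuration. The model ID
# is a placeholder; any tool-use-capable model could be substituted.
variant = {
    "name": "with-tools",
    "templateType": "CHAT",
    "templateConfiguration": {"chat": {
        "messages": [{"role": "user",
                      "content": [{"text": "What is the weather in {{city}}?"}]}],
        "inputVariables": [{"name": "city"}],
        "toolConfiguration": {
            "tools": [weather_tool],
            "toolChoice": {"auto": {}},  # let the model decide whether to call
        },
    }},
    "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",
}

# With AWS credentials configured:
# client = boto3.client("bedrock-agent")
# client.create_prompt(name="weather-prompt",
#                      defaultVariant="with-tools",
#                      variants=[variant])
```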
{'variants': {'templateConfiguration': {'chat': {'toolConfiguration': {'tools': {'toolSpec': {'strict': 'boolean'}}}}}}}
Creates a static snapshot of your prompt that can be deployed to production. For more information, see Deploy prompts using Prompt management by creating versions in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.create_prompt_version(
promptIdentifier='string',
description='string',
clientToken='string',
tags={
'string': 'string'
}
)
string
[REQUIRED]
The unique identifier of the prompt that you want to create a version of.
string
A description for the version of the prompt.
string
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
dict
Any tags that you want to attach to the version of the prompt. For more information, see Tagging resources in Amazon Bedrock.
(string) --
(string) --
dict
Response Syntax
{
'name': 'string',
'description': 'string',
'customerEncryptionKeyArn': 'string',
'defaultVariant': 'string',
'variants': [
{
'name': 'string',
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {},
'any': {},
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'metadata': [
{
'key': 'string',
'value': 'string'
},
],
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
'genAiResource': {
'agent': {
'agentIdentifier': 'string'
}
}
},
],
'id': 'string',
'arn': 'string',
'version': 'string',
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1)
}
Response Structure
(dict) --
name (string) --
The name of the prompt.
description (string) --
A description for the version.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key to encrypt the version of the prompt.
defaultVariant (string) --
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
name (string) --
The name of the prompt variant.
templateType (string) --
The type of prompt template to use.
templateConfiguration (dict) --
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) --
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) --
The role that the message belongs to.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) --
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) --
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation.
type (string) --
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
additionalModelRequestFields (:ref:`document<document>`) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) --
The ARN of the agent with which to use the prompt.
id (string) --
The unique identifier of the prompt.
arn (string) --
The Amazon Resource Name (ARN) of the version of the prompt.
version (string) --
The version of the prompt that was created. Versions are numbered incrementally, starting from 1.
createdAt (datetime) --
The time at which the prompt was created.
updatedAt (datetime) --
The time at which the prompt was last updated.
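Since create_prompt_version is idempotent per clientToken, a retried request with the same token is ignored rather than producing a duplicate version. A small sketch of assembling the request (the prompt identifier is a placeholder):

```python
import uuid

# Sketch of a create_prompt_version request. "PROMPT1234" is a placeholder
# prompt ID. The SDK autopopulates clientToken if you omit it; supplying
# your own lets you safely retry the identical request.
request = {
    "promptIdentifier": "PROMPT1234",
    "description": "Snapshot for production rollout",
    "clientToken": str(uuid.uuid4()),
    "tags": {"stage": "prod"},
}

# With AWS credentials configured:
# client = boto3.client("bedrock-agent")
# response = client.create_prompt_version(**request)
# Per the response docs above, versions are numbered incrementally from 1,
# while the editable working copy remains the DRAFT version.
```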
{'definition': {'nodes': {'configuration': {'prompt': {'sourceConfiguration': {'inline': {'templateConfiguration': {'chat': {'toolConfiguration': {'tools': {'toolSpec': {'strict': 'boolean'}}}}}}}}}}}}
Retrieves information about a flow. For more information, see Manage a flow in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.get_flow(
flowIdentifier='string'
)
string
[REQUIRED]
The unique identifier of the flow.
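Because the returned definition nests Prompt node templates several levels deep, auditing which tools opt into strict schema enforcement takes a bit of dictionary walking. A sketch against a trimmed, hypothetical response (node and tool names are invented for illustration):

```python
# A heavily trimmed, hypothetical get_flow response containing one inline
# CHAT Prompt node with two tools, only one of which sets strict=True.
sample_response = {
    "name": "summarizer",
    "definition": {"nodes": [
        {"name": "Summarize", "type": "Prompt",
         "configuration": {"prompt": {"sourceConfiguration": {"inline": {
             "templateType": "CHAT",
             "templateConfiguration": {"chat": {"toolConfiguration": {"tools": [
                 {"toolSpec": {"name": "get_weather", "strict": True}},
                 {"toolSpec": {"name": "get_news", "strict": False}},
             ]}}}}}}}},
        {"name": "FlowOutput", "type": "Output"},
    ]},
}

def strict_tools(response):
    """Return names of tools with strict=True across all inline Prompt nodes."""
    names = []
    for node in response.get("definition", {}).get("nodes", []):
        if node.get("type") != "Prompt":
            continue
        inline = (node.get("configuration", {}).get("prompt", {})
                  .get("sourceConfiguration", {}).get("inline", {}))
        chat = inline.get("templateConfiguration", {}).get("chat", {})
        for tool in chat.get("toolConfiguration", {}).get("tools", []):
            spec = tool.get("toolSpec", {})
            if spec.get("strict"):
                names.append(spec.get("name"))
    return names

print(strict_tools(sample_response))  # -> ['get_weather']
```

In practice the response dict would come from client.get_flow(flowIdentifier=...); only the traversal logic above is the point of the sketch.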
dict
Response Syntax
{
'name': 'string',
'description': 'string',
'executionRoleArn': 'string',
'customerEncryptionKeyArn': 'string',
'id': 'string',
'arn': 'string',
'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared',
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1),
'version': 'string',
'definition': {
'nodes': [
{
'name': 'string',
'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'|'InlineCode'|'Loop'|'LoopInput'|'LoopController',
'configuration': {
'input': {},
'output': {},
'knowledgeBase': {
'knowledgeBaseId': 'string',
'modelId': 'string',
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
},
'numberOfResults': 123,
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'rerankingConfiguration': {
'type': 'BEDROCK_RERANKING_MODEL',
'bedrockRerankingConfiguration': {
'modelConfiguration': {
'modelArn': 'string',
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
}
},
'numberOfRerankedResults': 123,
'metadataConfiguration': {
'selectionMode': 'SELECTIVE'|'ALL',
'selectiveModeConfiguration': {
'fieldsToInclude': [
{
'fieldName': 'string'
},
],
'fieldsToExclude': [
{
'fieldName': 'string'
},
]
}
}
}
},
'orchestrationConfiguration': {
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfig': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
},
'performanceConfig': {
'latency': 'standard'|'optimized'
}
}
},
'condition': {
'conditions': [
{
'name': 'string',
'expression': 'string'
},
]
},
'lex': {
'botAliasArn': 'string',
'localeId': 'string'
},
'prompt': {
'sourceConfiguration': {
'resource': {
'promptArn': 'string'
},
'inline': {
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {},
'any': {},
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None
}
},
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
}
},
'lambdaFunction': {
'lambdaArn': 'string'
},
'storage': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'agent': {
'agentAliasArn': 'string'
},
'retrieval': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'iterator': {},
'collector': {},
'inlineCode': {
'code': 'string',
'language': 'Python_3'
},
'loop': {
'definition': {'... recursive ...'}
},
'loopInput': {},
'loopController': {
'continueCondition': {
'name': 'string',
'expression': 'string'
},
'maxIterations': 123
}
},
'inputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array',
'expression': 'string',
'category': 'LoopCondition'|'ReturnValueToLoopStart'|'ExitLoop'
},
],
'outputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
]
},
],
'connections': [
{
'type': 'Data'|'Conditional',
'name': 'string',
'source': 'string',
'target': 'string',
'configuration': {
'data': {
'sourceOutput': 'string',
'targetInput': 'string'
},
'conditional': {
'condition': 'string'
}
}
},
]
},
'validations': [
{
'message': 'string',
'severity': 'Warning'|'Error',
'details': {
'cyclicConnection': {
'connection': 'string'
},
'duplicateConnections': {
'source': 'string',
'target': 'string'
},
'duplicateConditionExpression': {
'node': 'string',
'expression': 'string'
},
'unreachableNode': {
'node': 'string'
},
'unknownConnectionSource': {
'connection': 'string'
},
'unknownConnectionSourceOutput': {
'connection': 'string'
},
'unknownConnectionTarget': {
'connection': 'string'
},
'unknownConnectionTargetInput': {
'connection': 'string'
},
'unknownConnectionCondition': {
'connection': 'string'
},
'malformedConditionExpression': {
'node': 'string',
'condition': 'string',
'cause': 'string'
},
'malformedNodeInputExpression': {
'node': 'string',
'input': 'string',
'cause': 'string'
},
'mismatchedNodeInputType': {
'node': 'string',
'input': 'string',
'expectedType': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
'mismatchedNodeOutputType': {
'node': 'string',
'output': 'string',
'expectedType': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
'incompatibleConnectionDataType': {
'connection': 'string'
},
'missingConnectionConfiguration': {
'connection': 'string'
},
'missingDefaultCondition': {
'node': 'string'
},
'missingEndingNodes': {},
'missingNodeConfiguration': {
'node': 'string'
},
'missingNodeInput': {
'node': 'string',
'input': 'string'
},
'missingNodeOutput': {
'node': 'string',
'output': 'string'
},
'missingStartingNodes': {},
'multipleNodeInputConnections': {
'node': 'string',
'input': 'string'
},
'unfulfilledNodeInput': {
'node': 'string',
'input': 'string'
},
'unsatisfiedConnectionConditions': {
'connection': 'string'
},
'unspecified': {},
'unknownNodeInput': {
'node': 'string',
'input': 'string'
},
'unknownNodeOutput': {
'node': 'string',
'output': 'string'
},
'missingLoopInputNode': {
'loopNode': 'string'
},
'missingLoopControllerNode': {
'loopNode': 'string'
},
'multipleLoopInputNodes': {
'loopNode': 'string'
},
'multipleLoopControllerNodes': {
'loopNode': 'string'
},
'loopIncompatibleNodeType': {
'node': 'string',
'incompatibleNodeType': 'Input'|'Condition'|'Iterator'|'Collector',
'incompatibleNodeName': 'string'
},
'invalidLoopBoundary': {
'connection': 'string',
'source': 'string',
'target': 'string'
}
},
'type': 'CyclicConnection'|'DuplicateConnections'|'DuplicateConditionExpression'|'UnreachableNode'|'UnknownConnectionSource'|'UnknownConnectionSourceOutput'|'UnknownConnectionTarget'|'UnknownConnectionTargetInput'|'UnknownConnectionCondition'|'MalformedConditionExpression'|'MalformedNodeInputExpression'|'MismatchedNodeInputType'|'MismatchedNodeOutputType'|'IncompatibleConnectionDataType'|'MissingConnectionConfiguration'|'MissingDefaultCondition'|'MissingEndingNodes'|'MissingNodeConfiguration'|'MissingNodeInput'|'MissingNodeOutput'|'MissingStartingNodes'|'MultipleNodeInputConnections'|'UnfulfilledNodeInput'|'UnsatisfiedConnectionConditions'|'Unspecified'|'UnknownNodeInput'|'UnknownNodeOutput'|'MissingLoopInputNode'|'MissingLoopControllerNode'|'MultipleLoopInputNodes'|'MultipleLoopControllerNodes'|'LoopIncompatibleNodeType'|'InvalidLoopBoundary'
},
]
}
Response Structure
(dict) --
name (string) --
The name of the flow.
description (string) --
The description of the flow.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in the Amazon Bedrock User Guide.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that the flow is encrypted with.
id (string) --
The unique identifier of the flow.
arn (string) --
The Amazon Resource Name (ARN) of the flow.
status (string) --
The status of the flow. The following statuses are possible:
NotPrepared – The flow has been created or updated, but hasn't been prepared. If you just created the flow, you can't test it. If you updated the flow, the DRAFT version won't contain the latest changes for testing. Send a PrepareFlow request to package the latest changes into the DRAFT version.
Preparing – The flow is being prepared so that the DRAFT version contains the latest changes for testing.
Prepared – The flow is prepared and the DRAFT version contains the latest changes for testing.
Failed – The last API operation that you invoked on the flow failed. Send a GetFlow request and check the error message in the validations field.
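The four statuses above map naturally to a next step. The following sketch encodes that mapping; the function name is a placeholder and the returned strings are API operation names, not boto3 method names.

```python
def next_action(status):
    """Return the operation to call next for a given flow status."""
    if status in ("NotPrepared", "Failed"):
        return "PrepareFlow"   # package the latest changes into DRAFT
    if status == "Preparing":
        return "GetFlow"       # poll until the flow reaches Prepared
    return None                # Prepared: the DRAFT version is testable
```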
createdAt (datetime) --
The time at which the flow was created.
updatedAt (datetime) --
The time at which the flow was last updated.
version (string) --
The version of the flow for which information was retrieved.
definition (dict) --
The definition of the nodes and connections between the nodes in the flow.
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
name (string) --
A name for the node.
type (string) --
The type of node. This value must match the name of the key that you provide in the FlowNodeConfiguration field.
configuration (dict) --
Contains configurations for the node.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
knowledgeBaseId (string) --
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
numberOfResults (integer) --
The number of results to retrieve from the knowledge base.
promptTemplate (dict) --
A custom prompt template to use with the knowledge base for generating responses.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
rerankingConfiguration (dict) --
The configuration for reranking the retrieved results from the knowledge base to improve relevance.
type (string) --
Specifies the type of reranking model to use. Currently, the only supported value is BEDROCK_RERANKING_MODEL.
bedrockRerankingConfiguration (dict) --
Specifies the configuration for using an Amazon Bedrock reranker model to rerank retrieved results.
modelConfiguration (dict) --
Specifies the configuration for the Amazon Bedrock reranker model.
modelArn (string) --
The Amazon Resource Name (ARN) of the Amazon Bedrock reranker model.
additionalModelRequestFields (dict) --
Specifies additional model-specific request parameters as key-value pairs that are included in the request to the Amazon Bedrock reranker model.
(string) --
(:ref:`document<document>`) --
numberOfRerankedResults (integer) --
Specifies the number of results to return after reranking.
metadataConfiguration (dict) --
Specifies how metadata fields should be handled during the reranking process.
selectionMode (string) --
The mode for selecting metadata fields for reranking.
selectiveModeConfiguration (dict) --
The configuration for selective metadata field inclusion or exclusion during reranking.
fieldsToInclude (list) --
Specifies the metadata fields to include in the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) --
The name of the metadata field to include or exclude during reranking.
fieldsToExclude (list) --
Specifies the metadata fields to exclude from the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) --
The name of the metadata field to include or exclude during reranking.
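Put together, a knowledge base node's reranking settings might look like the following sketch. The model ARN, result count, and metadata field name are illustrative placeholders, and the config assumes that fieldsToInclude and fieldsToExclude are alternatives rather than used together.

```python
# Hypothetical rerankingConfiguration for a knowledge base node.
reranking_configuration = {
    "type": "BEDROCK_RERANKING_MODEL",
    "bedrockRerankingConfiguration": {
        "modelConfiguration": {
            # Placeholder ARN for a Bedrock reranker model.
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/example-reranker-v1",
        },
        "numberOfRerankedResults": 5,
        "metadataConfiguration": {
            "selectionMode": "SELECTIVE",
            "selectiveModeConfiguration": {
                # With SELECTIVE mode, list the fields to include
                # (or, alternatively, the fields to exclude).
                "fieldsToInclude": [{"fieldName": "source"}],
            },
        },
    },
}
```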
orchestrationConfiguration (dict) --
The configuration for orchestrating the retrieval and generation process in the knowledge base node.
promptTemplate (dict) --
A custom prompt template for orchestrating the retrieval and generation process.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfig (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (dict) --
The additional model-specific request parameters as key-value pairs to be included in the request to the foundation model.
(string) --
(:ref:`document<document>`) --
performanceConfig (dict) --
The performance configuration options for the knowledge base retrieval and generation process.
latency (string) --
The latency optimization setting.
condition (dict) --
Contains configurations for a condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) --
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
name (string) --
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
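A condition node definition following this shape might look like the sketch below. The node name, condition names, input name, and expression are all hypothetical, and the expressionless "default" entry assumes the usual fallback-branch convention for condition nodes.

```python
# Hypothetical Condition node: branches on a numeric input named "score".
condition_node = {
    "name": "CheckScore",
    "type": "Condition",
    "configuration": {
        "condition": {
            "conditions": [
                # The expression must refer to at least one node input.
                {"name": "highScore", "expression": "score >= 0.8"},
                # Fallback branch with no expression.
                {"name": "default"},
            ]
        }
    },
    "inputs": [
        {"name": "score", "type": "Number", "expression": "$.data"},
    ],
}
```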
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) --
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) --
The locale of the Amazon Lex bot to invoke.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
sourceConfiguration (dict) --
Specifies whether the prompt is from Prompt management or defined inline.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) --
The Amazon Resource Name (ARN) of the prompt from Prompt management.
inline (dict) --
Contains configurations for a prompt that is defined inline.
templateType (string) --
The type of prompt template.
templateConfiguration (dict) --
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) --
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) --
The role that the message belongs to.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) --
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) --
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation.
type (string) --
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
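A toolConfiguration for a chat template using the new strict field might look like the following sketch. The tool name, description, and input schema are illustrative placeholders.

```python
# Hypothetical toolConfiguration for a chat prompt template.
tool_configuration = {
    "tools": [
        {
            "toolSpec": {
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    }
                },
                # New field: enforce exact adherence to the JSON schema
                # when the model produces tool input.
                "strict": True,
            }
        }
    ],
    # Let the model decide whether to call a tool or generate text.
    "toolChoice": {"auto": {}},
}
```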
modelId (string) --
The unique identifier of the model or inference profile to run inference with.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (:ref:`document<document>`) --
Additional fields to be included in the model request for the Prompt node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) --
The Amazon Resource Name (ARN) of the Lambda function to invoke.
storage (dict) --
Contains configurations for a storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) --
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) --
The name of the Amazon S3 bucket in which to store the input into the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) --
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
retrieval (dict) --
Contains configurations for a retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) --
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) --
The name of the Amazon S3 bucket from which to retrieve data.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
inlineCode (dict) --
Contains configurations for an inline code node in your flow. Inline code nodes let you write and execute code directly within your flow, enabling data transformations, custom logic, and integrations without needing an external Lambda function.
code (string) --
The code that's executed in your inline code node. The code can access input data from previous nodes in the flow, perform operations on that data, and produce output that can be used by other nodes in your flow.
The code must be valid in the programming language that you specify.
language (string) --
The programming language used by your inline code node.
The code must be valid in the programming language that you specify. Currently, only Python 3 ( Python_3) is supported.
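An inline code node following this shape might look like the sketch below. The node name and the code body are placeholders; only the Python_3 language value comes from the description above.

```python
# Hypothetical InlineCode node; the embedded code body is illustrative.
inline_code_node = {
    "name": "NormalizeText",
    "type": "InlineCode",
    "configuration": {
        "inlineCode": {
            "language": "Python_3",  # currently the only supported value
            "code": "def normalize(text):\n    return text.strip().lower()",
        }
    },
}
```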
loop (dict) --
Contains configurations for a DoWhile loop in your flow.
definition (dict) --
The definition of the DoWhile loop nodes and connections between nodes in the flow.
loopInput (dict) --
Contains input node configurations for a DoWhile loop in your flow.
loopController (dict) --
Contains controller node configurations for a DoWhile loop in your flow.
continueCondition (dict) --
Specifies the condition that determines when the flow exits the DoWhile loop. The loop executes until this condition evaluates to true.
name (string) --
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
maxIterations (integer) --
Specifies the maximum number of times the DoWhile loop can iterate before the flow exits the loop.
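A LoopController node combining a continue condition with an iteration cap might look like the following sketch; the node name, condition input name, and limit are placeholders.

```python
# Hypothetical LoopController node for a DoWhile loop.
loop_controller_node = {
    "name": "LoopControl",
    "type": "LoopController",
    "configuration": {
        "loopController": {
            "continueCondition": {
                "name": "keepLooping",
                # The loop runs while this evaluates to true...
                "expression": "continueFlag == true",
            },
            # ...but never more than this many iterations.
            "maxIterations": 10,
        }
    },
}
```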
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input in an Amazon Bedrock Flows node.
name (string) --
Specifies a name for the input that you can reference.
type (string) --
Specifies the data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
expression (string) --
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
category (string) --
Specifies how input data flows between iterations in a DoWhile loop.
LoopCondition - Controls whether the loop continues by evaluating condition expressions against the input data. Use this category to define the condition that determines if the loop should continue.
ReturnValueToLoopStart - Defines data to pass back to the start of the loop's next iteration. Use this category for variables that you want to update for each loop iteration.
ExitLoop - Defines the value that's available once the loop ends. Use this category to expose loop results to nodes outside the loop.
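The three input categories above could be wired up on a loop's inputs as in this sketch. All names and expressions are hypothetical.

```python
# Hypothetical loop node inputs, one per DoWhile input category.
loop_inputs = [
    # Evaluated to decide whether the loop continues.
    {"name": "continueFlag", "type": "Boolean",
     "expression": "$.data", "category": "LoopCondition"},
    # Carried back to the start of the next iteration.
    {"name": "runningTotal", "type": "Number",
     "expression": "$.data", "category": "ReturnValueToLoopStart"},
    # Exposed to downstream nodes once the loop exits.
    {"name": "finalResult", "type": "Object",
     "expression": "$.data", "category": "ExitLoop"},
]
```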
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) --
A name for the output that you can reference.
type (string) --
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
type (string) --
Whether the source node that the connection begins from is a condition node ( Conditional) or not ( Data).
name (string) --
A name for the connection that you can reference.
source (string) --
The node that the connection starts at.
target (string) --
The node that the connection ends at.
configuration (dict) --
The configuration of the connection.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) --
The name of the output in the source node that the connection begins from.
targetInput (string) --
The name of the input in the target node that the connection ends at.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) --
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
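Concretely, the two connection types might be configured as in the sketch below. Node, output, input, and condition names are placeholders.

```python
# Hypothetical connections: one Data connection wiring an output to an
# input, and one Conditional connection leaving a condition node.
connections = [
    {
        "type": "Data",
        "name": "PromptToOutput",
        "source": "MyPrompt",
        "target": "FlowOutput",
        "configuration": {
            "data": {
                "sourceOutput": "modelCompletion",
                "targetInput": "document",
            }
        },
    },
    {
        "type": "Conditional",
        "name": "HighScoreBranch",
        "source": "CheckScore",
        "target": "ApprovedPath",
        # Names the condition (defined on the source node) that
        # triggers this branch.
        "configuration": {"conditional": {"condition": "highScore"}},
    },
]
```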
validations (list) --
A list of validation error messages related to the last failed operation on the flow.
(dict) --
Contains information about validation of the flow.
message (string) --
A message describing the validation error.
severity (string) --
The severity of the issue described in the message.
details (dict) --
Specific details about the validation issue encountered in the flow.
cyclicConnection (dict) --
Details about a cyclic connection in the flow.
connection (string) --
The name of the connection that causes the cycle in the flow.
duplicateConnections (dict) --
Details about duplicate connections between nodes.
source (string) --
The name of the source node where the duplicate connection starts.
target (string) --
The name of the target node where the duplicate connection ends.
duplicateConditionExpression (dict) --
Details about duplicate condition expressions in a node.
node (string) --
The name of the node containing the duplicate condition expressions.
expression (string) --
The duplicated condition expression.
unreachableNode (dict) --
Details about an unreachable node in the flow.
node (string) --
The name of the unreachable node.
unknownConnectionSource (dict) --
Details about an unknown source node for a connection.
connection (string) --
The name of the connection with the unknown source.
unknownConnectionSourceOutput (dict) --
Details about an unknown source output for a connection.
connection (string) --
The name of the connection with the unknown source output.
unknownConnectionTarget (dict) --
Details about an unknown target node for a connection.
connection (string) --
The name of the connection with the unknown target.
unknownConnectionTargetInput (dict) --
Details about an unknown target input for a connection.
connection (string) --
The name of the connection with the unknown target input.
unknownConnectionCondition (dict) --
Details about an unknown condition for a connection.
connection (string) --
The name of the connection with the unknown condition.
malformedConditionExpression (dict) --
Details about a malformed condition expression in a node.
node (string) --
The name of the node containing the malformed condition expression.
condition (string) --
The name of the malformed condition.
cause (string) --
The error message describing why the condition expression is malformed.
malformedNodeInputExpression (dict) --
Details about a malformed input expression in a node.
node (string) --
The name of the node containing the malformed input expression.
input (string) --
The name of the input with the malformed expression.
cause (string) --
The error message describing why the input expression is malformed.
mismatchedNodeInputType (dict) --
Details about mismatched input data types in a node.
node (string) --
The name of the node containing the input with the mismatched data type.
input (string) --
The name of the input with the mismatched data type.
expectedType (string) --
The expected data type for the node input.
mismatchedNodeOutputType (dict) --
Details about mismatched output data types in a node.
node (string) --
The name of the node containing the output with the mismatched data type.
output (string) --
The name of the output with the mismatched data type.
expectedType (string) --
The expected data type for the node output.
incompatibleConnectionDataType (dict) --
Details about incompatible data types in a connection.
connection (string) --
The name of the connection with incompatible data types.
missingConnectionConfiguration (dict) --
Details about missing configuration for a connection.
connection (string) --
The name of the connection missing configuration.
missingDefaultCondition (dict) --
Details about a missing default condition in a conditional node.
node (string) --
The name of the node missing the default condition.
missingEndingNodes (dict) --
Details about missing ending nodes in the flow.
missingNodeConfiguration (dict) --
Details about missing configuration for a node.
node (string) --
The name of the node missing a required configuration.
missingNodeInput (dict) --
Details about a missing required input in a node.
node (string) --
The name of the node missing the required input.
input (string) --
The name of the missing input.
missingNodeOutput (dict) --
Details about a missing required output in a node.
node (string) --
The name of the node missing the required output.
output (string) --
The name of the missing output.
missingStartingNodes (dict) --
Details about missing starting nodes in the flow.
multipleNodeInputConnections (dict) --
Details about multiple connections to a single node input.
node (string) --
The name of the node containing the input with multiple connections.
input (string) --
The name of the input with multiple connections to it.
unfulfilledNodeInput (dict) --
Details about an unfulfilled node input with no valid connections.
node (string) --
The name of the node containing the unfulfilled input.
input (string) --
The name of the unfulfilled input. An input is unfulfilled if there are no data connections to it.
unsatisfiedConnectionConditions (dict) --
Details about unsatisfied conditions for a connection.
connection (string) --
The name of the connection with unsatisfied conditions.
unspecified (dict) --
Details about an unspecified validation.
unknownNodeInput (dict) --
Details about an unknown input for a node.
node (string) --
The name of the node with the unknown input.
input (string) --
The name of the unknown input.
unknownNodeOutput (dict) --
Details about an unknown output for a node.
node (string) --
The name of the node with the unknown output.
output (string) --
The name of the unknown output.
missingLoopInputNode (dict) --
Details about a flow that's missing a required LoopInput node in a DoWhile loop.
loopNode (string) --
The DoWhile loop in a flow that's missing a required LoopInput node.
missingLoopControllerNode (dict) --
Details about a flow that's missing a required LoopController node in a DoWhile loop.
loopNode (string) --
The DoWhile loop in a flow that's missing a required LoopController node.
multipleLoopInputNodes (dict) --
Details about a flow that contains multiple LoopInput nodes in a DoWhile loop.
loopNode (string) --
The DoWhile loop in a flow that contains multiple LoopInput nodes.
multipleLoopControllerNodes (dict) --
Details about a flow that contains multiple LoopController nodes in a DoWhile loop.
loopNode (string) --
The DoWhile loop in a flow that contains multiple LoopController nodes.
loopIncompatibleNodeType (dict) --
Details about a flow that includes incompatible node types in a DoWhile loop.
node (string) --
The Loop container node that contains an incompatible node.
incompatibleNodeType (string) --
The node type of the incompatible node in the DoWhile loop. Some node types, like a condition node, aren't allowed in a DoWhile loop.
incompatibleNodeName (string) --
The node that's incompatible in the DoWhile loop.
invalidLoopBoundary (dict) --
Details about a flow that includes connections that violate loop boundary rules.
connection (string) --
The name of the connection that violates loop boundary rules.
source (string) --
The source node of the connection that violates DoWhile loop boundary rules.
target (string) --
The target node of the connection that violates DoWhile loop boundary rules.
type (string) --
The type of validation issue encountered in the flow.
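The validation fields above can be turned into readable diagnostics with a small helper. A minimal sketch in Python (the exact shape of each issue dict is assumed from the field descriptions above; the sample values are hypothetical):

```python
def summarize_flow_validation(validation):
    """Build a one-line message from a flow validation entry.

    `validation` is assumed to carry a `type` string and a `details` dict
    keyed by the camel-cased issue name, as described above.
    """
    parts = [validation.get("type", "Unknown")]
    details = validation.get("details", {})
    for issue in details.values():
        if isinstance(issue, dict):
            # Pick out whichever identifying fields this issue carries.
            for key in ("node", "loopNode", "input", "output", "connection"):
                if key in issue:
                    parts.append(f"{key}={issue[key]}")
    return " ".join(parts)
```

This keeps the caller independent of which issue variant was reported, since every variant exposes some subset of the same identifying fields.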
{'definition': {'nodes': {'configuration': {'prompt': {'sourceConfiguration': {'inline': {'templateConfiguration': {'chat': {'toolConfiguration': {'tools': {'toolSpec': {'strict': 'boolean'}}}}}}}}}}}}
Retrieves information about a version of a flow. For more information, see Deploy a flow in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.get_flow_version(
flowIdentifier='string',
flowVersion='string'
)
string
[REQUIRED]
The unique identifier of the flow for which to get information.
string
[REQUIRED]
The version of the flow for which to get information.
dict
Response Syntax
{
'name': 'string',
'description': 'string',
'executionRoleArn': 'string',
'customerEncryptionKeyArn': 'string',
'id': 'string',
'arn': 'string',
'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared',
'createdAt': datetime(2015, 1, 1),
'version': 'string',
'definition': {
'nodes': [
{
'name': 'string',
'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'|'InlineCode'|'Loop'|'LoopInput'|'LoopController',
'configuration': {
'input': {},
'output': {},
'knowledgeBase': {
'knowledgeBaseId': 'string',
'modelId': 'string',
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
},
'numberOfResults': 123,
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'rerankingConfiguration': {
'type': 'BEDROCK_RERANKING_MODEL',
'bedrockRerankingConfiguration': {
'modelConfiguration': {
'modelArn': 'string',
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
}
},
'numberOfRerankedResults': 123,
'metadataConfiguration': {
'selectionMode': 'SELECTIVE'|'ALL',
'selectiveModeConfiguration': {
'fieldsToInclude': [
{
'fieldName': 'string'
},
],
'fieldsToExclude': [
{
'fieldName': 'string'
},
]
}
}
}
},
'orchestrationConfiguration': {
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfig': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
},
'performanceConfig': {
'latency': 'standard'|'optimized'
}
}
},
'condition': {
'conditions': [
{
'name': 'string',
'expression': 'string'
},
]
},
'lex': {
'botAliasArn': 'string',
'localeId': 'string'
},
'prompt': {
'sourceConfiguration': {
'resource': {
'promptArn': 'string'
},
'inline': {
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {},
'any': {},
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None
}
},
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
}
},
'lambdaFunction': {
'lambdaArn': 'string'
},
'storage': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'agent': {
'agentAliasArn': 'string'
},
'retrieval': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'iterator': {},
'collector': {},
'inlineCode': {
'code': 'string',
'language': 'Python_3'
},
'loop': {
'definition': {'... recursive ...'}
},
'loopInput': {},
'loopController': {
'continueCondition': {
'name': 'string',
'expression': 'string'
},
'maxIterations': 123
}
},
'inputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array',
'expression': 'string',
'category': 'LoopCondition'|'ReturnValueToLoopStart'|'ExitLoop'
},
],
'outputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
]
},
],
'connections': [
{
'type': 'Data'|'Conditional',
'name': 'string',
'source': 'string',
'target': 'string',
'configuration': {
'data': {
'sourceOutput': 'string',
'targetInput': 'string'
},
'conditional': {
'condition': 'string'
}
}
},
]
}
}
Response Structure
(dict) --
name (string) --
The name of the version.
description (string) --
The description of the flow.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that the version of the flow is encrypted with.
id (string) --
The unique identifier of the flow.
arn (string) --
The Amazon Resource Name (ARN) of the flow.
status (string) --
The status of the flow.
createdAt (datetime) --
The time at which the flow was created.
version (string) --
The version of the flow for which information was retrieved.
definition (dict) --
The definition of the nodes and connections between nodes in the flow.
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
name (string) --
A name for the node.
type (string) --
The type of node. This value must match the name of the key for the configuration that you provide in the FlowNodeConfiguration field.
configuration (dict) --
Contains configurations for the node.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
knowledgeBaseId (string) --
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
numberOfResults (integer) --
The number of results to retrieve from the knowledge base.
promptTemplate (dict) --
A custom prompt template to use with the knowledge base for generating responses.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
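The text inference fields above can be assembled with light client-side validation before being passed in a request. A sketch (the 0-1 bounds for temperature and topP are common conventions and assumed here, not service-confirmed limits):

```python
def text_inference_config(temperature=0.7, top_p=0.9,
                          max_tokens=512, stop_sequences=None):
    """Build an `inferenceConfiguration` dict for a text prompt."""
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature is expected to be between 0 and 1")
    if not 0.0 <= top_p <= 1.0:
        raise ValueError("topP is expected to be between 0 and 1")
    return {
        "text": {
            "temperature": temperature,
            "topP": top_p,
            "maxTokens": max_tokens,
            "stopSequences": stop_sequences or [],
        }
    }
```

The resulting dict can be supplied wherever an inferenceConfiguration field of this shape appears.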
rerankingConfiguration (dict) --
The configuration for reranking the retrieved results from the knowledge base to improve relevance.
type (string) --
Specifies the type of reranking model to use. Currently, the only supported value is BEDROCK_RERANKING_MODEL.
bedrockRerankingConfiguration (dict) --
Specifies the configuration for using an Amazon Bedrock reranker model to rerank retrieved results.
modelConfiguration (dict) --
Specifies the configuration for the Amazon Bedrock reranker model.
modelArn (string) --
The Amazon Resource Name (ARN) of the Amazon Bedrock reranker model.
additionalModelRequestFields (dict) --
Specifies additional model-specific request parameters as key-value pairs that are included in the request to the Amazon Bedrock reranker model.
(string) --
(document) --
numberOfRerankedResults (integer) --
Specifies the number of results to return after reranking.
metadataConfiguration (dict) --
Specifies how metadata fields should be handled during the reranking process.
selectionMode (string) --
The mode for selecting metadata fields for reranking.
selectiveModeConfiguration (dict) --
The configuration for selective metadata field inclusion or exclusion during reranking.
fieldsToInclude (list) --
Specifies the metadata fields to include in the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) --
The name of the metadata field to include or exclude during reranking.
fieldsToExclude (list) --
Specifies the metadata fields to exclude from the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) --
The name of the metadata field to include or exclude during reranking.
orchestrationConfiguration (dict) --
The configuration for orchestrating the retrieval and generation process in the knowledge base node.
promptTemplate (dict) --
A custom prompt template for orchestrating the retrieval and generation process.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfig (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (dict) --
The additional model-specific request parameters as key-value pairs to be included in the request to the foundation model.
(string) --
(document) --
performanceConfig (dict) --
The performance configuration options for the knowledge base retrieval and generation process.
latency (string) --
The latency optimization setting.
condition (dict) --
Contains configurations for a condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) --
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
name (string) --
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) --
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) --
The identifier of the locale of the Amazon Lex bot to invoke.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
sourceConfiguration (dict) --
Specifies whether the prompt is from Prompt management or defined inline.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) --
The Amazon Resource Name (ARN) of the prompt from Prompt management.
inline (dict) --
Contains configurations for a prompt that is defined inline.
templateType (string) --
The type of prompt template.
templateConfiguration (dict) --
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) --
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) --
The role that the message belongs to.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) --
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) --
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation.
type (string) --
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
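Since this release adds the strict field to toolSpec, a chat template's tool configuration can now request strict schema adherence for tool input. A minimal sketch of such a configuration (the tool name, description, and schema are hypothetical):

```python
weather_tool = {
    "toolSpec": {
        "name": "get_weather",  # hypothetical tool
        "description": "Look up the current weather for a city.",
        "inputSchema": {
            "json": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            }
        },
        # New in this release: enforce strict JSON schema adherence
        # for the tool input the model generates.
        "strict": True,
    }
}

tool_configuration = {
    "tools": [weather_tool],
    # Let the model decide whether to call a tool or generate text.
    "toolChoice": {"auto": {}},
}
```

This dict matches the toolConfiguration shape shown in the response syntax and can be supplied in the corresponding request field.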
modelId (string) --
The unique identifier of the model or inference profile to run inference with.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (document) --
Additional fields to be included in the model request for the Prompt node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) --
The Amazon Resource Name (ARN) of the Lambda function to invoke.
storage (dict) --
Contains configurations for a storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) --
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) --
The name of the Amazon S3 bucket in which to store the input into the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) --
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
retrieval (dict) --
Contains configurations for a retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) --
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) --
The name of the Amazon S3 bucket from which to retrieve data.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
inlineCode (dict) --
Contains configurations for an inline code node in your flow. Inline code nodes let you write and execute code directly within your flow, enabling data transformations, custom logic, and integrations without needing an external Lambda function.
code (string) --
The code that's executed in your inline code node. The code can access input data from previous nodes in the flow, perform operations on that data, and produce output that can be used by other nodes in your flow.
The code must be valid in the programming language that you specify.
language (string) --
The programming language used by your inline code node.
The code must be valid in the programming language that you specify. Currently, only Python 3 (Python_3) is supported.
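An inline code node carries its source as a string in the node configuration. A minimal sketch of such a node (the node name and the embedded code are illustrative):

```python
inline_code_node = {
    "name": "NormalizeText",  # illustrative node name
    "type": "InlineCode",
    "configuration": {
        "inlineCode": {
            # The embedded code can read input data from upstream nodes
            # and produce output for downstream nodes.
            "code": (
                "def handler(inputs):\n"
                "    return inputs['text'].strip().lower()\n"
            ),
            # Python 3 is currently the only supported language.
            "language": "Python_3",
        }
    },
}
```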
loop (dict) --
Contains configurations for a DoWhile loop in your flow.
definition (dict) --
The definition of the DoWhile loop nodes and connections between nodes in the flow.
loopInput (dict) --
Contains input node configurations for a DoWhile loop in your flow.
loopController (dict) --
Contains controller node configurations for a DoWhile loop in your flow.
continueCondition (dict) --
Specifies the condition that determines when the flow exits the DoWhile loop. The loop executes until this condition evaluates to true.
name (string) --
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
maxIterations (integer) --
Specifies the maximum number of times the DoWhile loop can iterate before the flow exits the loop.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input in an Amazon Bedrock Flows node.
name (string) --
Specifies a name for the input that you can reference.
type (string) --
Specifies the data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
expression (string) --
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
category (string) --
Specifies how input data flows between iterations in a DoWhile loop.
LoopCondition - Controls whether the loop continues by evaluating condition expressions against the input data. Use this category to define the condition that determines if the loop should continue.
ReturnValueToLoopStart - Defines data to pass back to the start of the loop's next iteration. Use this category for variables that you want to update for each loop iteration.
ExitLoop - Defines the value that's available once the loop ends. Use this category to expose loop results to nodes outside the loop.
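The three input categories above come together in a LoopController node. A sketch of one such node (the node name, input names, and expressions are illustrative; the expression syntax is assumed from the flow expression documentation):

```python
loop_controller_node = {
    "name": "LoopControl",  # illustrative node name
    "type": "LoopController",
    "configuration": {
        "loopController": {
            "continueCondition": {
                "name": "keepGoing",
                "expression": "attempts < 5",  # illustrative condition
            },
            # Hard cap on iterations regardless of the condition.
            "maxIterations": 5,
        }
    },
    "inputs": [
        # Evaluated against the continue condition each iteration.
        {"name": "attempts", "type": "Number",
         "expression": "$.data", "category": "LoopCondition"},
        # Passed back to the start of the next iteration.
        {"name": "draft", "type": "String",
         "expression": "$.data", "category": "ReturnValueToLoopStart"},
        # Exposed to nodes outside the loop once it exits.
        {"name": "result", "type": "String",
         "expression": "$.data", "category": "ExitLoop"},
    ],
}
```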
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) --
A name for the output that you can reference.
type (string) --
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
type (string) --
Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).
name (string) --
A name for the connection that you can reference.
source (string) --
The node that the connection starts at.
target (string) --
The node that the connection ends at.
configuration (dict) --
The configuration of the connection.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) --
The name of the output in the source node that the connection begins from.
targetInput (string) --
The name of the input in the target node that the connection ends at.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) --
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
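The connections array described above forms a directed graph over node names, which makes it straightforward to traverse a flow definition. A small sketch (the definition and node names are hypothetical, shaped like the response syntax above):

```python
def downstream_nodes(definition, node_name):
    """Return names of nodes reachable in one hop from `node_name`,
    following the flow definition's `connections` array."""
    return [
        conn["target"]
        for conn in definition.get("connections", [])
        if conn["source"] == node_name
    ]

# Hypothetical two-connection definition: FlowInput -> Prompt_1 -> FlowOutput.
definition = {
    "connections": [
        {"type": "Data", "name": "InToPrompt",
         "source": "FlowInput", "target": "Prompt_1",
         "configuration": {"data": {"sourceOutput": "document",
                                    "targetInput": "topic"}}},
        {"type": "Data", "name": "PromptToOut",
         "source": "Prompt_1", "target": "FlowOutput",
         "configuration": {"data": {"sourceOutput": "modelCompletion",
                                    "targetInput": "document"}}},
    ]
}
```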
{'variants': {'templateConfiguration': {'chat': {'toolConfiguration': {'tools': {'toolSpec': {'strict': 'boolean'}}}}}}}
Retrieves information about the working draft (DRAFT version) of a prompt or a version of it, depending on whether you include the promptVersion field or not. For more information, see View information about prompts using Prompt management and View information about a version of your prompt in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.get_prompt(
promptIdentifier='string',
promptVersion='string'
)
string
[REQUIRED]
The unique identifier of the prompt.
string
The version of the prompt about which you want to retrieve information. Omit this field to return information about the working draft of the prompt.
dict
Response Syntax
{
'name': 'string',
'description': 'string',
'customerEncryptionKeyArn': 'string',
'defaultVariant': 'string',
'variants': [
{
'name': 'string',
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {},
'any': {},
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'metadata': [
{
'key': 'string',
'value': 'string'
},
],
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
'genAiResource': {
'agent': {
'agentIdentifier': 'string'
}
}
},
],
'id': 'string',
'arn': 'string',
'version': 'string',
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1)
}
Response Structure
(dict) --
name (string) --
The name of the prompt.
description (string) --
The description of the prompt.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that the prompt is encrypted with.
defaultVariant (string) --
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
name (string) --
The name of the prompt variant.
templateType (string) --
The type of prompt template to use.
templateConfiguration (dict) --
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) --
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) --
The role that the message belongs to.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) --
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) --
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (document) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation.
type (string) --
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
additionalModelRequestFields (document) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) --
The ARN of the agent with which to use the prompt.
id (string) --
The unique identifier of the prompt.
arn (string) --
The Amazon Resource Name (ARN) of the prompt or the prompt version (if you specified a version in the request).
version (string) --
The version of the prompt.
createdAt (datetime) --
The time at which the prompt was created.
updatedAt (datetime) --
The time at which the prompt was last updated.
{'definition': {'nodes': {'configuration': {'prompt': {'sourceConfiguration': {'inline': {'templateConfiguration': {'chat': {'toolConfiguration': {'tools': {'toolSpec': {'strict': 'boolean'}}}}}}}}}}}}
Modifies a flow. Include both fields that you want to keep and fields that you want to change. For more information, see How it works and Create a flow in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.update_flow(
name='string',
description='string',
executionRoleArn='string',
customerEncryptionKeyArn='string',
definition={
'nodes': [
{
'name': 'string',
'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'|'InlineCode'|'Loop'|'LoopInput'|'LoopController',
'configuration': {
'input': {},
'output': {},
'knowledgeBase': {
'knowledgeBaseId': 'string',
'modelId': 'string',
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
},
'numberOfResults': 123,
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'rerankingConfiguration': {
'type': 'BEDROCK_RERANKING_MODEL',
'bedrockRerankingConfiguration': {
'modelConfiguration': {
'modelArn': 'string',
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
}
},
'numberOfRerankedResults': 123,
'metadataConfiguration': {
'selectionMode': 'SELECTIVE'|'ALL',
'selectiveModeConfiguration': {
'fieldsToInclude': [
{
'fieldName': 'string'
},
],
'fieldsToExclude': [
{
'fieldName': 'string'
},
]
}
}
}
},
'orchestrationConfiguration': {
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfig': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
},
'performanceConfig': {
'latency': 'standard'|'optimized'
}
}
},
'condition': {
'conditions': [
{
'name': 'string',
'expression': 'string'
},
]
},
'lex': {
'botAliasArn': 'string',
'localeId': 'string'
},
'prompt': {
'sourceConfiguration': {
'resource': {
'promptArn': 'string'
},
'inline': {
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {},
'any': {},
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None
}
},
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
}
},
'lambdaFunction': {
'lambdaArn': 'string'
},
'storage': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'agent': {
'agentAliasArn': 'string'
},
'retrieval': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'iterator': {},
'collector': {},
'inlineCode': {
'code': 'string',
'language': 'Python_3'
},
'loop': {
'definition': {'... recursive ...'}
},
'loopInput': {},
'loopController': {
'continueCondition': {
'name': 'string',
'expression': 'string'
},
'maxIterations': 123
}
},
'inputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array',
'expression': 'string',
'category': 'LoopCondition'|'ReturnValueToLoopStart'|'ExitLoop'
},
],
'outputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
]
},
],
'connections': [
{
'type': 'Data'|'Conditional',
'name': 'string',
'source': 'string',
'target': 'string',
'configuration': {
'data': {
'sourceOutput': 'string',
'targetInput': 'string'
},
'conditional': {
'condition': 'string'
}
}
},
]
},
flowIdentifier='string'
)
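As a minimal sketch of the request shape above, the following builds a two-node definition (an Input node wired directly to an Output node) that could be passed to update_flow. The node names, role ARN, and flow identifier are placeholders, not real resources.

```python
# Minimal update_flow payload: an Input node wired straight to an Output node.
# All names and ARNs below are illustrative placeholders.
definition = {
    "nodes": [
        {
            "name": "FlowInput",
            "type": "Input",
            "configuration": {"input": {}},
            # Input nodes declare outputs only; inputs can't be specified.
            "outputs": [{"name": "document", "type": "String"}],
        },
        {
            "name": "FlowOutput",
            "type": "Output",
            "configuration": {"output": {}},
            # Output nodes declare inputs only; outputs can't be specified.
            "inputs": [
                {"name": "document", "type": "String", "expression": "$.data"}
            ],
        },
    ],
    "connections": [
        {
            "type": "Data",
            "name": "InputToOutput",
            "source": "FlowInput",
            "target": "FlowOutput",
            "configuration": {
                "data": {"sourceOutput": "document", "targetInput": "document"}
            },
        }
    ],
}

# With a real boto3 client, the call would look like:
# client = boto3.client("bedrock-agent")
# client.update_flow(
#     name="my-flow",
#     executionRoleArn="arn:aws:iam::111122223333:role/FlowsRole",
#     definition=definition,
#     flowIdentifier="FLOW_ID",
# )
```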
string
[REQUIRED]
A name for the flow.
string
A description for the flow.
string
[REQUIRED]
The Amazon Resource Name (ARN) of the service role with permissions to create and manage a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
string
The Amazon Resource Name (ARN) of the KMS key to encrypt the flow.
dict
A definition of the nodes and the connections between the nodes in the flow.
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
name (string) -- [REQUIRED]
A name for the node.
type (string) -- [REQUIRED]
The type of node. This value must match the name of the key that you provide in the configuration field.
configuration (dict) --
Contains configurations for the node.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
knowledgeBaseId (string) -- [REQUIRED]
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
numberOfResults (integer) --
The number of results to retrieve from the knowledge base.
promptTemplate (dict) --
A custom prompt template to use with the knowledge base for generating responses.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
rerankingConfiguration (dict) --
The configuration for reranking the retrieved results from the knowledge base to improve relevance.
type (string) -- [REQUIRED]
Specifies the type of reranking model to use. Currently, the only supported value is BEDROCK_RERANKING_MODEL.
bedrockRerankingConfiguration (dict) --
Specifies the configuration for using an Amazon Bedrock reranker model to rerank retrieved results.
modelConfiguration (dict) -- [REQUIRED]
Specifies the configuration for the Amazon Bedrock reranker model.
modelArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon Bedrock reranker model.
additionalModelRequestFields (dict) --
Specifies additional model-specific request parameters as key-value pairs that are included in the request to the Amazon Bedrock reranker model.
(string) --
(:ref:`document<document>`) --
numberOfRerankedResults (integer) --
Specifies the number of results to return after reranking.
metadataConfiguration (dict) --
Specifies how metadata fields should be handled during the reranking process.
selectionMode (string) -- [REQUIRED]
The mode for selecting metadata fields for reranking.
selectiveModeConfiguration (dict) --
The configuration for selective metadata field inclusion or exclusion during reranking.
fieldsToInclude (list) --
Specifies the metadata fields to include in the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) -- [REQUIRED]
The name of the metadata field to include or exclude during reranking.
fieldsToExclude (list) --
Specifies the metadata fields to exclude from the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) -- [REQUIRED]
The name of the metadata field to include or exclude during reranking.
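The reranking fields above can be sketched as a single configuration dict. This sketch assumes SELECTIVE mode with an include list; the model ARN and metadata field name are placeholders.

```python
# Sketch of a knowledge base node rerankingConfiguration: rerank the retrieved
# results with a Bedrock reranker model and keep only the top 5, restricting
# the metadata visible to the reranker to selected fields.
# The model ARN and field name are illustrative placeholders.
reranking_configuration = {
    "type": "BEDROCK_RERANKING_MODEL",  # currently the only supported type
    "bedrockRerankingConfiguration": {
        "modelConfiguration": {
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/RERANKER_MODEL_ID"
        },
        "numberOfRerankedResults": 5,
        "metadataConfiguration": {
            "selectionMode": "SELECTIVE",
            # In SELECTIVE mode, use fieldsToInclude or fieldsToExclude to
            # control which metadata fields take part in reranking.
            "selectiveModeConfiguration": {
                "fieldsToInclude": [{"fieldName": "category"}]
            },
        },
    },
}
```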
orchestrationConfiguration (dict) --
The configuration for orchestrating the retrieval and generation process in the knowledge base node.
promptTemplate (dict) --
A custom prompt template for orchestrating the retrieval and generation process.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfig (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (dict) --
The additional model-specific request parameters as key-value pairs to be included in the request to the foundation model.
(string) --
(:ref:`document<document>`) --
performanceConfig (dict) --
The performance configuration options for the knowledge base retrieval and generation process.
latency (string) --
The latency optimization setting.
condition (dict) --
Contains configurations for a condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) -- [REQUIRED]
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
name (string) -- [REQUIRED]
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
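A condition node built from the fields above might look like the following sketch. The node and input names are illustrative, and the trailing condition without an expression is assumed to act as the default branch.

```python
# Sketch of a condition node that branches on a numeric input. Each condition's
# expression refers to the node's inputs by name; "score" is an assumed input.
condition_node = {
    "name": "CheckScore",
    "type": "Condition",
    "configuration": {
        "condition": {
            "conditions": [
                {"name": "HighScore", "expression": "score >= 0.8"},
                # Assumed default branch: no expression, catches everything else.
                {"name": "default"},
            ]
        }
    },
    "inputs": [
        {"name": "score", "type": "Number", "expression": "$.data"}
    ],
}
```

Each condition name can then be referenced by a Conditional connection leaving this node.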
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) -- [REQUIRED]
The identifier of the locale (for example, en_US) in which to invoke the Amazon Lex bot.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
sourceConfiguration (dict) -- [REQUIRED]
Specifies whether the prompt is from Prompt management or defined inline.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the prompt from Prompt management.
inline (dict) --
Contains configurations for a prompt that is defined inline.
templateType (string) -- [REQUIRED]
The type of prompt template.
templateConfiguration (dict) -- [REQUIRED]
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) -- [REQUIRED]
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) -- [REQUIRED]
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) -- [REQUIRED]
The role that the message belongs to.
content (list) -- [REQUIRED]
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) -- [REQUIRED]
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) -- [REQUIRED]
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) -- [REQUIRED]
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) -- [REQUIRED]
The name of the tool.
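The toolSpec.strict flag described above is the addition this release introduces. A chat template's toolConfiguration using it might be sketched as follows; the tool name and schema are illustrative.

```python
# Sketch of a chat template toolConfiguration using the newly added
# toolSpec.strict flag to enforce strict JSON schema adherence for tool input.
# The tool name and input schema are illustrative.
tool_configuration = {
    "tools": [
        {
            "toolSpec": {
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    }
                },
                "strict": True,  # new in this release
            }
        }
    ],
    # Force the model to request the tool above rather than generate text.
    "toolChoice": {"tool": {"name": "get_weather"}},
}
```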
modelId (string) -- [REQUIRED]
The unique identifier of the model or inference profile to run inference with.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (:ref:`document<document>`) --
Additional fields to be included in the model request for the Prompt node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Lambda function to invoke.
storage (dict) --
Contains configurations for a storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket in which to store the input into the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
retrieval (dict) --
Contains configurations for a retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket from which to retrieve data.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
inlineCode (dict) --
Contains configurations for an inline code node in your flow. Inline code nodes let you write and execute code directly within your flow, enabling data transformations, custom logic, and integrations without needing an external Lambda function.
code (string) -- [REQUIRED]
The code that's executed in your inline code node. The code can access input data from previous nodes in the flow, perform operations on that data, and produce output that can be used by other nodes in your flow.
The code must be valid in the programming language that you specify.
language (string) -- [REQUIRED]
The programming language used by your inline code node.
The code must be valid in the programming language that you specify. Currently, only Python 3 (Python_3) is supported.
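An inline code node built from these fields might look like the following sketch. How the code string accesses the node's inputs at runtime is not specified here, so treat the code body as purely illustrative.

```python
# Sketch of an inline code node. The "code" value is ordinary Python 3 source
# that the node executes; the function body shown is an illustrative placeholder.
inline_code_node = {
    "name": "NormalizeText",
    "type": "InlineCode",
    "configuration": {
        "inlineCode": {
            "language": "Python_3",  # currently the only supported language
            "code": "def normalize(text):\n    return text.strip().lower()\n",
        }
    },
}
```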
loop (dict) --
Contains configurations for a DoWhile loop in your flow.
definition (dict) --
The definition of the DoWhile loop nodes and connections between nodes in the flow.
loopInput (dict) --
Contains input node configurations for a DoWhile loop in your flow.
loopController (dict) --
Contains controller node configurations for a DoWhile loop in your flow.
continueCondition (dict) -- [REQUIRED]
Specifies the condition that determines when the flow exits the DoWhile loop. The loop executes until this condition evaluates to true.
name (string) -- [REQUIRED]
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
maxIterations (integer) --
Specifies the maximum number of times the DoWhile loop can iterate before the flow exits the loop.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input in an Amazon Bedrock Flows node.
name (string) -- [REQUIRED]
Specifies a name for the input that you can reference.
type (string) -- [REQUIRED]
Specifies the data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
expression (string) -- [REQUIRED]
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
category (string) --
Specifies how input data flows between iterations in a DoWhile loop.
LoopCondition - Controls whether the loop continues by evaluating condition expressions against the input data. Use this category to define the condition that determines if the loop should continue.
ReturnValueToLoopStart - Defines data to pass back to the start of the loop's next iteration. Use this category for variables that you want to update for each loop iteration.
ExitLoop - Defines the value that's available once the loop ends. Use this category to expose loop results to nodes outside the loop.
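The loop controller and input categories above can be combined in one node, sketched below. The node and input names are illustrative, and the expressions assume a numeric counter flows through the loop.

```python
# Sketch of a LoopController node for a DoWhile loop, wiring a counter through
# the loop with the input categories described above. Names are illustrative.
loop_controller_node = {
    "name": "LoopGate",
    "type": "LoopController",
    "configuration": {
        "loopController": {
            "continueCondition": {
                "name": "KeepLooping",
                "expression": "counter < 5",
            },
            "maxIterations": 10,  # hard stop even if the condition never trips
        }
    },
    "inputs": [
        # LoopCondition: drives the continueCondition expression above.
        {"name": "counter", "type": "Number", "expression": "$.data",
         "category": "LoopCondition"},
        # ReturnValueToLoopStart: passed back to the next iteration's start.
        {"name": "nextCounter", "type": "Number", "expression": "$.data",
         "category": "ReturnValueToLoopStart"},
    ],
}
```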
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) -- [REQUIRED]
A name for the output that you can reference.
type (string) -- [REQUIRED]
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
type (string) -- [REQUIRED]
Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).
name (string) -- [REQUIRED]
A name for the connection that you can reference.
source (string) -- [REQUIRED]
The node that the connection starts at.
target (string) -- [REQUIRED]
The node that the connection ends at.
configuration (dict) --
The configuration of the connection.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) -- [REQUIRED]
The name of the output in the source node that the connection begins from.
targetInput (string) -- [REQUIRED]
The name of the input in the target node that the connection ends at.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) -- [REQUIRED]
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
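The two connection kinds described above can be sketched side by side. The node and condition names are illustrative.

```python
# Sketch of the two connection kinds: a Data connection mapping a source
# output to a target input, and a Conditional connection leaving a condition
# node. Node and condition names are illustrative placeholders.
connections = [
    {
        "type": "Data",
        "name": "InputToPrompt",
        "source": "FlowInput",
        "target": "SummarizePrompt",
        "configuration": {
            "data": {"sourceOutput": "document", "targetInput": "text"}
        },
    },
    {
        "type": "Conditional",
        "name": "HighScoreBranch",
        "source": "CheckScore",
        "target": "FlowOutput",
        # Names the condition (defined on the source condition node) that
        # activates this edge.
        "configuration": {"conditional": {"condition": "HighScore"}},
    },
]
```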
string
[REQUIRED]
The unique identifier of the flow.
dict
Response Syntax
{
'name': 'string',
'description': 'string',
'executionRoleArn': 'string',
'customerEncryptionKeyArn': 'string',
'id': 'string',
'arn': 'string',
'status': 'Failed'|'Prepared'|'Preparing'|'NotPrepared',
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1),
'version': 'string',
'definition': {
'nodes': [
{
'name': 'string',
'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'|'InlineCode'|'Loop'|'LoopInput'|'LoopController',
'configuration': {
'input': {},
'output': {},
'knowledgeBase': {
'knowledgeBaseId': 'string',
'modelId': 'string',
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
},
'numberOfResults': 123,
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'rerankingConfiguration': {
'type': 'BEDROCK_RERANKING_MODEL',
'bedrockRerankingConfiguration': {
'modelConfiguration': {
'modelArn': 'string',
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
}
},
'numberOfRerankedResults': 123,
'metadataConfiguration': {
'selectionMode': 'SELECTIVE'|'ALL',
'selectiveModeConfiguration': {
'fieldsToInclude': [
{
'fieldName': 'string'
},
],
'fieldsToExclude': [
{
'fieldName': 'string'
},
]
}
}
}
},
'orchestrationConfiguration': {
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfig': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
},
'performanceConfig': {
'latency': 'standard'|'optimized'
}
}
},
'condition': {
'conditions': [
{
'name': 'string',
'expression': 'string'
},
]
},
'lex': {
'botAliasArn': 'string',
'localeId': 'string'
},
'prompt': {
'sourceConfiguration': {
'resource': {
'promptArn': 'string'
},
'inline': {
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {},
'any': {},
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None
}
},
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
}
},
'lambdaFunction': {
'lambdaArn': 'string'
},
'storage': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'agent': {
'agentAliasArn': 'string'
},
'retrieval': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'iterator': {},
'collector': {},
'inlineCode': {
'code': 'string',
'language': 'Python_3'
},
'loop': {
'definition': {'... recursive ...'}
},
'loopInput': {},
'loopController': {
'continueCondition': {
'name': 'string',
'expression': 'string'
},
'maxIterations': 123
}
},
'inputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array',
'expression': 'string',
'category': 'LoopCondition'|'ReturnValueToLoopStart'|'ExitLoop'
},
],
'outputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
]
},
],
'connections': [
{
'type': 'Data'|'Conditional',
'name': 'string',
'source': 'string',
'target': 'string',
'configuration': {
'data': {
'sourceOutput': 'string',
'targetInput': 'string'
},
'conditional': {
'condition': 'string'
}
}
},
]
}
}
Response Structure
(dict) --
name (string) --
The name of the flow.
description (string) --
The description of the flow.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the service role with permissions to create a flow. For more information, see Create a service role for flows in Amazon Bedrock in the Amazon Bedrock User Guide.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that the flow was encrypted with.
id (string) --
The unique identifier of the flow.
arn (string) --
The Amazon Resource Name (ARN) of the flow.
status (string) --
The status of the flow. When you submit this request, the status will be NotPrepared. If updating fails, the status becomes Failed.
createdAt (datetime) --
The time at which the flow was created.
updatedAt (datetime) --
The time at which the flow was last updated.
version (string) --
The version of the flow. When you update a flow, the version updated is the DRAFT version.
definition (dict) --
A definition of the nodes and the connections between nodes in the flow.
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
name (string) --
A name for the node.
type (string) --
The type of node. This value must match the name of the key that you provide in the configuration field.
configuration (dict) --
Contains configurations for the node.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
knowledgeBaseId (string) --
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
numberOfResults (integer) --
The number of results to retrieve from the knowledge base.
promptTemplate (dict) --
A custom prompt template to use with the knowledge base for generating responses.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
rerankingConfiguration (dict) --
The configuration for reranking the retrieved results from the knowledge base to improve relevance.
type (string) --
Specifies the type of reranking model to use. Currently, the only supported value is BEDROCK_RERANKING_MODEL.
bedrockRerankingConfiguration (dict) --
Specifies the configuration for using an Amazon Bedrock reranker model to rerank retrieved results.
modelConfiguration (dict) --
Specifies the configuration for the Amazon Bedrock reranker model.
modelArn (string) --
The Amazon Resource Name (ARN) of the Amazon Bedrock reranker model.
additionalModelRequestFields (dict) --
Specifies additional model-specific request parameters as key-value pairs that are included in the request to the Amazon Bedrock reranker model.
(string) --
(:ref:`document<document>`) --
numberOfRerankedResults (integer) --
Specifies the number of results to return after reranking.
metadataConfiguration (dict) --
Specifies how metadata fields should be handled during the reranking process.
selectionMode (string) --
The mode for selecting metadata fields for reranking.
selectiveModeConfiguration (dict) --
The configuration for selective metadata field inclusion or exclusion during reranking.
fieldsToInclude (list) --
Specifies the metadata fields to include in the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) --
The name of the metadata field to include or exclude during reranking.
fieldsToExclude (list) --
Specifies the metadata fields to exclude from the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) --
The name of the metadata field to include or exclude during reranking.
orchestrationConfiguration (dict) --
The configuration for orchestrating the retrieval and generation process in the knowledge base node.
promptTemplate (dict) --
A custom prompt template for orchestrating the retrieval and generation process.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfig (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (dict) --
The additional model-specific request parameters as key-value pairs to be included in the request to the foundation model.
(string) --
(:ref:`document<document>`) --
performanceConfig (dict) --
The performance configuration options for the knowledge base retrieval and generation process.
latency (string) --
The latency optimization setting.
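Putting the knowledge base fields above together, a KnowledgeBase node configuration might look like the following sketch; the knowledge base ID, model identifiers, and metadata field name are placeholders, not values from this document.

```python
# Sketch of a KnowledgeBase node that reranks retrieved results and keeps
# only one metadata field. All IDs, ARNs, and names are placeholders.
kb_node = {
    "name": "LookupDocs",
    "type": "KnowledgeBase",
    "configuration": {"knowledgeBase": {
        "knowledgeBaseId": "KB12345678",
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
        "numberOfResults": 10,
        "rerankingConfiguration": {
            "type": "BEDROCK_RERANKING_MODEL",
            "bedrockRerankingConfiguration": {
                "modelConfiguration": {
                    "modelArn": ("arn:aws:bedrock:us-west-2::"
                                 "foundation-model/amazon.rerank-v1:0"),
                },
                # Keep only the top 3 results after reranking.
                "numberOfRerankedResults": 3,
                "metadataConfiguration": {
                    "selectionMode": "SELECTIVE",
                    "selectiveModeConfiguration": {
                        "fieldsToInclude": [{"fieldName": "source"}],
                    },
                },
            },
        },
    }},
    "inputs": [{"name": "retrievalQuery", "type": "String",
                "expression": "$.data"}],
    "outputs": [{"name": "outputText", "type": "String"}],
}
```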
condition (dict) --
Contains configurations for a condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) --
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
name (string) --
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
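For example, a Condition node with one named branch and a catch-all branch might be defined as follows; the input name and threshold are hypothetical.

```python
# A Condition node sketch: one expression branch plus a fallback branch.
# The "score" input and the 0.8 threshold are hypothetical.
condition_node = {
    "name": "CheckScore",
    "type": "Condition",
    "configuration": {"condition": {
        "conditions": [
            # Taken when the "score" input exceeds the threshold.
            {"name": "HighScore", "expression": "score > 0.8"},
            # Fallback branch with no expression.
            {"name": "default"},
        ],
    }},
    "inputs": [{"name": "score", "type": "Number", "expression": "$.data"}],
}
```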
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) --
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) --
The identifier of the locale in which to invoke the Amazon Lex bot (for example, en_US).
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
sourceConfiguration (dict) --
Specifies whether the prompt is from Prompt management or defined inline.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) --
The Amazon Resource Name (ARN) of the prompt from Prompt management.
inline (dict) --
Contains configurations for a prompt that is defined inline.
templateType (string) --
The type of prompt template.
templateConfiguration (dict) --
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) --
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) --
The role that the message belongs to.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) --
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) --
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation.
type (string) --
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
modelId (string) --
The unique identifier of the model or inference profile to run inference with.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (:ref:`document<document>`) --
Additional fields to be included in the model request for the Prompt node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) --
The Amazon Resource Name (ARN) of the Lambda function to invoke.
storage (dict) --
Contains configurations for a storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) --
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) --
The name of the Amazon S3 bucket in which to store the input into the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) --
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
retrieval (dict) --
Contains configurations for a retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) --
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) --
The name of the Amazon S3 bucket from which to retrieve data.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
inlineCode (dict) --
Contains configurations for an inline code node in your flow. Inline code nodes let you write and execute code directly within your flow, enabling data transformations, custom logic, and integrations without needing an external Lambda function.
code (string) --
The code that's executed in your inline code node. The code can access input data from previous nodes in the flow, perform operations on that data, and produce output that can be used by other nodes in your flow.
The code must be valid in the programming language that you specify.
language (string) --
The programming language used by your inline code node.
The code must be valid in the programming language that you specify. Currently, only Python 3 (Python_3) is supported.
loop (dict) --
Contains configurations for a DoWhile loop in your flow.
definition (dict) --
The definition of the DoWhile loop nodes and connections between nodes in the flow.
loopInput (dict) --
Contains input node configurations for a DoWhile loop in your flow.
loopController (dict) --
Contains controller node configurations for a DoWhile loop in your flow.
continueCondition (dict) --
Specifies the condition that determines when the flow exits the DoWhile loop. The loop executes until this condition evaluates to true.
name (string) --
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
maxIterations (integer) --
Specifies the maximum number of times the DoWhile loop can iterate before the flow exits the loop.
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input in an Amazon Bedrock Flows node.
name (string) --
Specifies a name for the input that you can reference.
type (string) --
Specifies the data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
expression (string) --
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
category (string) --
Specifies how input data flows between iterations in a DoWhile loop.
LoopCondition - Controls whether the loop continues by evaluating condition expressions against the input data. Use this category to define the condition that determines if the loop should continue.
ReturnValueToLoopStart - Defines data to pass back to the start of the loop's next iteration. Use this category for variables that you want to update for each loop iteration.
ExitLoop - Defines the value that's available once the loop ends. Use this category to expose loop results to nodes outside the loop.
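The input categories above can be seen together in a LoopController sketch; the node, input, and expression names are illustrative.

```python
# A LoopController node whose inputs use the LoopCondition and
# ReturnValueToLoopStart categories. All names are illustrative.
loop_controller = {
    "name": "LoopControl",
    "type": "LoopController",
    "configuration": {"loopController": {
        # The continue/exit decision is evaluated against the "counter" input.
        "continueCondition": {"name": "KeepLooping",
                              "expression": "counter < 5"},
        # Hard cap on iterations regardless of the condition.
        "maxIterations": 10,
    }},
    "inputs": [
        # Drives the continue condition each iteration.
        {"name": "counter", "type": "Number", "expression": "$.data",
         "category": "LoopCondition"},
        # Passed back to the start of the next iteration.
        {"name": "nextCounter", "type": "Number", "expression": "$.data",
         "category": "ReturnValueToLoopStart"},
    ],
}
```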
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) --
A name for the output that you can reference.
type (string) --
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
type (string) --
Whether the source node that the connection begins from is a condition node ( Conditional) or not ( Data).
name (string) --
A name for the connection that you can reference.
source (string) --
The node that the connection starts at.
target (string) --
The node that the connection ends at.
configuration (dict) --
The configuration of the connection.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) --
The name of the output in the source node that the connection begins from.
targetInput (string) --
The name of the input in the target node that the connection ends at.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) --
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
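As a concrete sketch, the definition below wires an Input node through an inline TEXT Prompt node to an Output node. The role ARN, model ID, and node and variable names are illustrative placeholders, and the boto3 call itself is shown commented out.

```python
# Minimal Input -> Prompt -> Output flow definition. All ARNs, IDs, and
# names are placeholders, not values taken from a real account.
flow_definition = {
    "nodes": [
        {
            "name": "FlowInput",
            "type": "Input",
            "configuration": {"input": {}},
            "outputs": [{"name": "document", "type": "String"}],
        },
        {
            "name": "Summarize",
            "type": "Prompt",
            "configuration": {"prompt": {"sourceConfiguration": {"inline": {
                "templateType": "TEXT",
                "templateConfiguration": {"text": {
                    "text": "Summarize the following text: {{document}}",
                    "inputVariables": [{"name": "document"}],
                }},
                "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
            }}}},
            "inputs": [{"name": "document", "type": "String",
                        "expression": "$.data"}],
            "outputs": [{"name": "modelCompletion", "type": "String"}],
        },
        {
            "name": "FlowOutput",
            "type": "Output",
            "configuration": {"output": {}},
            "inputs": [{"name": "document", "type": "String",
                        "expression": "$.data"}],
        },
    ],
    "connections": [
        {"type": "Data", "name": "InputToPrompt",
         "source": "FlowInput", "target": "Summarize",
         "configuration": {"data": {"sourceOutput": "document",
                                    "targetInput": "document"}}},
        {"type": "Data", "name": "PromptToOutput",
         "source": "Summarize", "target": "FlowOutput",
         "configuration": {"data": {"sourceOutput": "modelCompletion",
                                    "targetInput": "document"}}},
    ],
}

# With AWS credentials configured, the flow would be created like so:
# import boto3
# client = boto3.client("bedrock-agent")
# response = client.create_flow(
#     name="summarizer-flow",
#     executionRoleArn="arn:aws:iam::111122223333:role/FlowExecutionRole",
#     definition=flow_definition,
# )
```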
{'variants': {'templateConfiguration': {'chat': {'toolConfiguration': {'tools': {'toolSpec': {'strict': 'boolean'}}}}}}}
Modifies a prompt in your prompt library. Include both fields that you want to keep and fields that you want to replace. For more information, see Prompt management in Amazon Bedrock and Edit prompts in your prompt library in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.update_prompt(
name='string',
description='string',
customerEncryptionKeyArn='string',
defaultVariant='string',
variants=[
{
'name': 'string',
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {}
,
'any': {}
,
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'metadata': [
{
'key': 'string',
'value': 'string'
},
],
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
'genAiResource': {
'agent': {
'agentIdentifier': 'string'
}
}
},
],
promptIdentifier='string'
)
string
[REQUIRED]
A name for the prompt.
string
A description for the prompt.
string
The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.
string
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
list
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
name (string) -- [REQUIRED]
The name of the prompt variant.
templateType (string) -- [REQUIRED]
The type of prompt template to use.
templateConfiguration (dict) -- [REQUIRED]
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) -- [REQUIRED]
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) -- [REQUIRED]
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) -- [REQUIRED]
The role that the message belongs to.
content (list) -- [REQUIRED]
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) -- [REQUIRED]
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) -- [REQUIRED]
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) -- [REQUIRED]
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) -- [REQUIRED]
The name of the tool.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) -- [REQUIRED]
The key of a metadata tag for a prompt variant.
value (string) -- [REQUIRED]
The value of a metadata tag for a prompt variant.
additionalModelRequestFields (:ref:`document<document>`) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) -- [REQUIRED]
The ARN of the agent with which to use the prompt.
string
[REQUIRED]
The unique identifier of the prompt.
dict
Response Syntax
{
'name': 'string',
'description': 'string',
'customerEncryptionKeyArn': 'string',
'defaultVariant': 'string',
'variants': [
{
'name': 'string',
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {},
'any': {},
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'metadata': [
{
'key': 'string',
'value': 'string'
},
],
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None,
'genAiResource': {
'agent': {
'agentIdentifier': 'string'
}
}
},
],
'id': 'string',
'arn': 'string',
'version': 'string',
'createdAt': datetime(2015, 1, 1),
'updatedAt': datetime(2015, 1, 1)
}
Response Structure
(dict) --
name (string) --
The name of the prompt.
description (string) --
The description of the prompt.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.
defaultVariant (string) --
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
name (string) --
The name of the prompt variant.
templateType (string) --
The type of prompt template to use.
templateConfiguration (dict) --
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) --
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) --
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) --
The role that the message belongs to.
content (list) --
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) --
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) --
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) --
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) --
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) --
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation.
type (string) --
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) --
The name of the tool.
modelId (string) --
The unique identifier of the model or inference profile with which to run inference on the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
additionalModelRequestFields (:ref:`document<document>`) --
Contains model-specific inference configurations that aren't in the inferenceConfiguration field. To see model-specific inference parameters, see Inference request parameters and response fields for foundation models.
genAiResource (dict) --
Specifies a generative AI resource with which to use the prompt.
agent (dict) --
Specifies an Amazon Bedrock agent with which to use the prompt.
agentIdentifier (string) --
The ARN of the agent with which to use the prompt.
id (string) --
The unique identifier of the prompt.
arn (string) --
The Amazon Resource Name (ARN) of the prompt.
version (string) --
The version of the prompt. When you update a prompt, the version updated is the DRAFT version.
createdAt (datetime) --
The time at which the prompt was created.
updatedAt (datetime) --
The time at which the prompt was last updated.
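The new strict field introduced by this change is set per tool in a CHAT variant. The sketch below is illustrative: the tool name, schema, model ID, and prompt identifier are placeholders, and the update_prompt call is shown commented out.

```python
# A CHAT variant whose single tool opts into strict JSON schema adherence.
# Tool name, schema, model ID, and identifiers are illustrative placeholders.
chat_variant = {
    "name": "with-strict-tool",
    "templateType": "CHAT",
    "templateConfiguration": {"chat": {
        "messages": [{
            "role": "user",
            "content": [{"text": "What is the weather in {{city}}?"}],
        }],
        "inputVariables": [{"name": "city"}],
        "toolConfiguration": {
            "tools": [{"toolSpec": {
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "inputSchema": {"json": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                }},
                # New field: enforce strict schema adherence for tool input.
                "strict": True,
            }}],
            "toolChoice": {"auto": {}},
        },
    }},
    "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",
}

# import boto3
# client = boto3.client("bedrock-agent")
# client.update_prompt(name="weather-prompt",
#                      promptIdentifier="PROMPT12345",
#                      defaultVariant="with-strict-tool",
#                      variants=[chat_variant])
```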
{'definition': {'nodes': {'configuration': {'prompt': {'sourceConfiguration': {'inline': {'templateConfiguration': {'chat': {'toolConfiguration': {'tools': {'toolSpec': {'strict': 'boolean'}}}}}}}}}}}}
Validates the definition of a flow.
See also: AWS API Documentation
Request Syntax
client.validate_flow_definition(
definition={
'nodes': [
{
'name': 'string',
'type': 'Input'|'Output'|'KnowledgeBase'|'Condition'|'Lex'|'Prompt'|'LambdaFunction'|'Storage'|'Agent'|'Retrieval'|'Iterator'|'Collector'|'InlineCode'|'Loop'|'LoopInput'|'LoopController',
'configuration': {
'input': {}
,
'output': {}
,
'knowledgeBase': {
'knowledgeBaseId': 'string',
'modelId': 'string',
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
},
'numberOfResults': 123,
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'rerankingConfiguration': {
'type': 'BEDROCK_RERANKING_MODEL',
'bedrockRerankingConfiguration': {
'modelConfiguration': {
'modelArn': 'string',
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
}
},
'numberOfRerankedResults': 123,
'metadataConfiguration': {
'selectionMode': 'SELECTIVE'|'ALL',
'selectiveModeConfiguration': {
'fieldsToInclude': [
{
'fieldName': 'string'
},
],
'fieldsToExclude': [
{
'fieldName': 'string'
},
]
}
}
}
},
'orchestrationConfiguration': {
'promptTemplate': {
'textPromptTemplate': 'string'
},
'inferenceConfig': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {
'string': {...}|[...]|123|123.4|'string'|True|None
},
'performanceConfig': {
'latency': 'standard'|'optimized'
}
}
},
'condition': {
'conditions': [
{
'name': 'string',
'expression': 'string'
},
]
},
'lex': {
'botAliasArn': 'string',
'localeId': 'string'
},
'prompt': {
'sourceConfiguration': {
'resource': {
'promptArn': 'string'
},
'inline': {
'templateType': 'TEXT'|'CHAT',
'templateConfiguration': {
'text': {
'text': 'string',
'cachePoint': {
'type': 'default'
},
'inputVariables': [
{
'name': 'string'
},
]
},
'chat': {
'messages': [
{
'role': 'user'|'assistant',
'content': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
]
},
],
'system': [
{
'text': 'string',
'cachePoint': {
'type': 'default'
}
},
],
'inputVariables': [
{
'name': 'string'
},
],
'toolConfiguration': {
'tools': [
{
'toolSpec': {
'name': 'string',
'description': 'string',
'inputSchema': {
'json': {...}|[...]|123|123.4|'string'|True|None
},
'strict': True|False
},
'cachePoint': {
'type': 'default'
}
},
],
'toolChoice': {
'auto': {}
,
'any': {}
,
'tool': {
'name': 'string'
}
}
}
}
},
'modelId': 'string',
'inferenceConfiguration': {
'text': {
'temperature': ...,
'topP': ...,
'maxTokens': 123,
'stopSequences': [
'string',
]
}
},
'additionalModelRequestFields': {...}|[...]|123|123.4|'string'|True|None
}
},
'guardrailConfiguration': {
'guardrailIdentifier': 'string',
'guardrailVersion': 'string'
}
},
'lambdaFunction': {
'lambdaArn': 'string'
},
'storage': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'agent': {
'agentAliasArn': 'string'
},
'retrieval': {
'serviceConfiguration': {
's3': {
'bucketName': 'string'
}
}
},
'iterator': {}
,
'collector': {}
,
'inlineCode': {
'code': 'string',
'language': 'Python_3'
},
'loop': {
'definition': {'... recursive ...'}
},
'loopInput': {}
,
'loopController': {
'continueCondition': {
'name': 'string',
'expression': 'string'
},
'maxIterations': 123
}
},
'inputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array',
'expression': 'string',
'category': 'LoopCondition'|'ReturnValueToLoopStart'|'ExitLoop'
},
],
'outputs': [
{
'name': 'string',
'type': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
]
},
],
'connections': [
{
'type': 'Data'|'Conditional',
'name': 'string',
'source': 'string',
'target': 'string',
'configuration': {
'data': {
'sourceOutput': 'string',
'targetInput': 'string'
},
'conditional': {
'condition': 'string'
}
}
},
]
}
)
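As a minimal, hedged sketch of the request shape (all node names are placeholders, and the call itself is shown commented out because it needs real AWS credentials and a valid execution role ARN), a definition can be as small as an Input node wired straight to an Output node:

```python
# Minimal flow definition: an Input node connected directly to an Output node.
# Node and connection names are placeholders chosen for this example.
minimal_definition = {
    "nodes": [
        {
            "name": "FlowInput",
            "type": "Input",
            "configuration": {"input": {}},
            "outputs": [{"name": "document", "type": "String"}],
        },
        {
            "name": "FlowOutput",
            "type": "Output",
            "configuration": {"output": {}},
            "inputs": [
                {"name": "document", "type": "String", "expression": "$.data"}
            ],
        },
    ],
    "connections": [
        {
            "type": "Data",
            "name": "InputToOutput",
            "source": "FlowInput",
            "target": "FlowOutput",
            "configuration": {
                "data": {"sourceOutput": "document", "targetInput": "document"}
            },
        }
    ],
}

# With a configured client, the definition would then be passed as, e.g.:
# import boto3
# client = boto3.client("bedrock-agent")
# client.create_flow(
#     name="demo-flow",
#     executionRoleArn="arn:aws:iam::123456789012:role/FlowRole",  # placeholder
#     definition=minimal_definition,
# )
```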
definition (dict) -- [REQUIRED]
The definition of a flow to validate.
nodes (list) --
An array of node definitions in the flow.
(dict) --
Contains configurations about a node in the flow.
name (string) -- [REQUIRED]
A name for the node.
type (string) -- [REQUIRED]
The type of node. This value must match the name of the key that you provide in the configuration you provide in the FlowNodeConfiguration field.
configuration (dict) --
Contains configurations for the node.
input (dict) --
Contains configurations for an input flow node in your flow. The first node in the flow. inputs can't be specified for this node.
output (dict) --
Contains configurations for an output flow node in your flow. The last node in the flow. outputs can't be specified for this node.
knowledgeBase (dict) --
Contains configurations for a knowledge base node in your flow. Queries a knowledge base and returns the retrieved results or generated response.
knowledgeBaseId (string) -- [REQUIRED]
The unique identifier of the knowledge base to query.
modelId (string) --
The unique identifier of the model or inference profile to use to generate a response from the query results. Omit this field if you want to return the retrieved results as an array.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply during query and response generation for the knowledge base in this configuration.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
numberOfResults (integer) --
The number of results to retrieve from the knowledge base.
promptTemplate (dict) --
A custom prompt template to use with the knowledge base for generating responses.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
rerankingConfiguration (dict) --
The configuration for reranking the retrieved results from the knowledge base to improve relevance.
type (string) -- [REQUIRED]
Specifies the type of reranking model to use. Currently, the only supported value is BEDROCK_RERANKING_MODEL.
bedrockRerankingConfiguration (dict) --
Specifies the configuration for using an Amazon Bedrock reranker model to rerank retrieved results.
modelConfiguration (dict) -- [REQUIRED]
Specifies the configuration for the Amazon Bedrock reranker model.
modelArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon Bedrock reranker model.
additionalModelRequestFields (dict) --
Specifies additional model-specific request parameters as key-value pairs that are included in the request to the Amazon Bedrock reranker model.
(string) --
(:ref:`document<document>`) --
numberOfRerankedResults (integer) --
Specifies the number of results to return after reranking.
metadataConfiguration (dict) --
Specifies how metadata fields should be handled during the reranking process.
selectionMode (string) -- [REQUIRED]
The mode for selecting metadata fields for reranking.
selectiveModeConfiguration (dict) --
The configuration for selective metadata field inclusion or exclusion during reranking.
fieldsToInclude (list) --
Specifies the metadata fields to include in the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) -- [REQUIRED]
The name of the metadata field to include or exclude during reranking.
fieldsToExclude (list) --
Specifies the metadata fields to exclude from the reranking process.
(dict) --
Specifies a metadata field to include or exclude during the reranking process.
fieldName (string) -- [REQUIRED]
The name of the metadata field to include or exclude during reranking.
orchestrationConfiguration (dict) --
The configuration for orchestrating the retrieval and generation process in the knowledge base node.
promptTemplate (dict) --
A custom prompt template for orchestrating the retrieval and generation process.
textPromptTemplate (string) --
The text of the prompt template.
inferenceConfig (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (dict) --
The additional model-specific request parameters as key-value pairs to be included in the request to the foundation model.
(string) --
(:ref:`document<document>`) --
performanceConfig (dict) --
The performance configuration options for the knowledge base retrieval and generation process.
latency (string) --
The latency optimization setting.
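Putting the fields above together, a knowledge base node configuration might look like the following sketch. The knowledge base ID and model ID are placeholders, and the input name `retrievalQuery` and output name `outputText` are assumptions about the node's runtime contract rather than values stated in this reference:

```python
# Sketch of a KnowledgeBase node that generates a response from query results.
# IDs are placeholders; omit "modelId" to get the raw retrieved results instead.
kb_node = {
    "name": "AnswerFromKB",
    "type": "KnowledgeBase",
    "configuration": {
        "knowledgeBase": {
            "knowledgeBaseId": "KB123EXAMPLE",  # placeholder ID
            "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
            "numberOfResults": 5,
            "inferenceConfiguration": {
                "text": {"temperature": 0.2, "topP": 0.9, "maxTokens": 512}
            },
        }
    },
    "inputs": [
        # "retrievalQuery" is an assumed input name for illustration.
        {"name": "retrievalQuery", "type": "String", "expression": "$.data"}
    ],
    "outputs": [{"name": "outputText", "type": "String"}],  # assumed name
}
```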
condition (dict) --
Contains configurations for a condition node in your flow. Defines conditions that lead to different branches of the flow.
conditions (list) -- [REQUIRED]
An array of conditions. Each member contains the name of a condition and an expression that defines the condition.
(dict) --
Defines a condition in the condition node.
name (string) -- [REQUIRED]
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
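As a hedged sketch of the shape above, a condition node can branch on a numeric input. The expression syntax and the fallback condition with no expression are assumptions based on the Condition node documentation and the `missingDefaultCondition` validation described later; all names are placeholders:

```python
# Sketch of a Condition node that routes based on an input called "length".
condition_node = {
    "name": "RouteBySize",
    "type": "Condition",
    "configuration": {
        "condition": {
            "conditions": [
                # Expression syntax is an assumption; see the Condition node docs.
                {"name": "IsLong", "expression": "length > 100"},
                # Assumed fallback branch: a condition with no expression.
                {"name": "default"},
            ]
        }
    },
    "inputs": [
        {"name": "length", "type": "Number", "expression": "$.data"}
    ],
}
```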
lex (dict) --
Contains configurations for a Lex node in your flow. Invokes an Amazon Lex bot to identify the intent of the input and return the intent as the output.
botAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon Lex bot alias to invoke.
localeId (string) -- [REQUIRED]
The locale (for example, en_US) of the Amazon Lex bot to invoke.
prompt (dict) --
Contains configurations for a prompt node in your flow. Runs a prompt and generates the model response as the output. You can use a prompt from Prompt management or you can configure one in this node.
sourceConfiguration (dict) -- [REQUIRED]
Specifies whether the prompt is from Prompt management or defined inline.
resource (dict) --
Contains configurations for a prompt from Prompt management.
promptArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the prompt from Prompt management.
inline (dict) --
Contains configurations for a prompt that is defined inline.
templateType (string) -- [REQUIRED]
The type of prompt template.
templateConfiguration (dict) -- [REQUIRED]
Contains a prompt and variables in the prompt that can be replaced with values at runtime.
text (dict) --
Contains configurations for the text in a message for a prompt.
text (string) -- [REQUIRED]
The message for the prompt.
cachePoint (dict) --
A cache checkpoint within a template configuration.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
chat (dict) --
Contains configurations to use the prompt in a conversational format.
messages (list) -- [REQUIRED]
Contains messages in the chat for the prompt.
(dict) --
A message input or response from a model. For more information, see Create a prompt using Prompt management.
role (string) -- [REQUIRED]
The role that the message belongs to.
content (list) -- [REQUIRED]
The content in the message.
(dict) --
Contains the content for the message you pass to, or receive from a model. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the message.
cachePoint (dict) --
Creates a cache checkpoint within a message.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
system (list) --
Contains system prompts to provide context to the model or to describe how it should behave.
(dict) --
Contains a system prompt to provide context to the model or to describe how it should behave. For more information, see Create a prompt using Prompt management.
text (string) --
The text in the system prompt.
cachePoint (dict) --
Creates a cache checkpoint within a system prompt.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
toolConfiguration (dict) --
Configuration information for the tools that the model can use when generating a response.
tools (list) -- [REQUIRED]
An array of tools to pass to a model.
(dict) --
Contains configurations for a tool that a model can use when generating a response. For more information, see Use a tool to complete an Amazon Bedrock model response.
toolSpec (dict) --
The specification for the tool.
name (string) -- [REQUIRED]
The name of the tool.
description (string) --
The description of the tool.
inputSchema (dict) -- [REQUIRED]
The input schema for the tool.
json (:ref:`document<document>`) --
A JSON object defining the input schema for the tool.
strict (boolean) --
Whether to enforce strict JSON schema adherence for the tool input.
cachePoint (dict) --
Creates a cache checkpoint within a tool designation.
type (string) -- [REQUIRED]
Indicates that the CachePointBlock is of the default type.
toolChoice (dict) --
Defines which tools the model should request when invoked.
auto (dict) --
Defines tools. The model automatically decides whether to call a tool or to generate text instead.
any (dict) --
Defines tools, at least one of which must be requested by the model. No text is generated but the results of tool use are sent back to the model to help generate a response.
tool (dict) --
Defines a specific tool that the model must request. No text is generated but the results of tool use are sent back to the model to help generate a response.
name (string) -- [REQUIRED]
The name of the tool.
modelId (string) -- [REQUIRED]
The unique identifier of the model or inference profile to run inference with.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt.
text (dict) --
Contains inference configurations for a text prompt.
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
additionalModelRequestFields (:ref:`document<document>`) --
Additional fields to be included in the model request for the Prompt node.
guardrailConfiguration (dict) --
Contains configurations for a guardrail to apply to the prompt in this node and the response generated from it.
guardrailIdentifier (string) --
The unique identifier of the guardrail.
guardrailVersion (string) --
The version of the guardrail.
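A hedged sketch of an inline prompt node using the TEXT template type follows. The model ID is a placeholder, and the convention that the template variable `{{input}}` is filled from the node input of the same name is an assumption about the runtime behavior:

```python
# Sketch of a Prompt node with an inline TEXT template.
prompt_node = {
    "name": "Summarize",
    "type": "Prompt",
    "configuration": {
        "prompt": {
            "sourceConfiguration": {
                "inline": {
                    "templateType": "TEXT",
                    "templateConfiguration": {
                        "text": {
                            "text": "Summarize the following text: {{input}}",
                            "inputVariables": [{"name": "input"}],
                        }
                    },
                    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
                    "inferenceConfiguration": {
                        "text": {"temperature": 0.5, "maxTokens": 256}
                    },
                }
            }
        }
    },
    "inputs": [{"name": "input", "type": "String", "expression": "$.data"}],
    "outputs": [{"name": "modelCompletion", "type": "String"}],  # assumed name
}
```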
lambdaFunction (dict) --
Contains configurations for a Lambda function node in your flow. Invokes a Lambda function.
lambdaArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Lambda function to invoke.
storage (dict) --
Contains configurations for a storage node in your flow. Stores an input in an Amazon S3 location.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for storing the input into the node.
s3 (dict) --
Contains configurations for the Amazon S3 location in which to store the input into the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket in which to store the input into the node.
agent (dict) --
Contains configurations for an agent node in your flow. Invokes an alias of an agent and returns the response.
agentAliasArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the alias of the agent to invoke.
retrieval (dict) --
Contains configurations for a retrieval node in your flow. Retrieves data from an Amazon S3 location and returns it as the output.
serviceConfiguration (dict) -- [REQUIRED]
Contains configurations for the service to use for retrieving data to return as the output from the node.
s3 (dict) --
Contains configurations for the Amazon S3 location from which to retrieve data to return as the output from the node.
bucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket from which to retrieve data.
iterator (dict) --
Contains configurations for an iterator node in your flow. Takes an input that is an array and iteratively sends each item of the array as an output to the following node. The size of the array is also returned in the output.
The output flow node at the end of the flow iteration will return a response for each member of the array. To return only one response, you can include a collector node downstream from the iterator node.
collector (dict) --
Contains configurations for a collector node in your flow. Collects an iteration of inputs and consolidates them into an array of outputs.
inlineCode (dict) --
Contains configurations for an inline code node in your flow. Inline code nodes let you write and execute code directly within your flow, enabling data transformations, custom logic, and integrations without needing an external Lambda function.
code (string) -- [REQUIRED]
The code that's executed in your inline code node. The code can access input data from previous nodes in the flow, perform operations on that data, and produce output that can be used by other nodes in your flow.
The code must be valid in the programming language that you specify.
language (string) -- [REQUIRED]
The programming language used by your inline code node.
The code must be valid in the programming language that you specify. Currently, only Python 3 (Python_3) is supported.
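A sketch of the inline code node shape follows. Note that the entry-point convention inside the `code` string, and the input/output names, are assumptions made for illustration; the actual contract between the node's inputs/outputs and the executed code is defined by the Flows runtime:

```python
# Sketch of an InlineCode node; the body's function signature is an assumption.
inline_code_node = {
    "name": "WordCount",
    "type": "InlineCode",
    "configuration": {
        "inlineCode": {
            "language": "Python_3",
            "code": (
                "def handler(text):\n"          # assumed entry-point shape
                "    return len(text.split())\n"
            ),
        }
    },
    "inputs": [{"name": "text", "type": "String", "expression": "$.data"}],
    "outputs": [{"name": "count", "type": "Number"}],
}
```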
loop (dict) --
Contains configurations for a DoWhile loop in your flow.
definition (dict) --
The definition of the DoWhile loop nodes and connections between nodes in the flow.
loopInput (dict) --
Contains input node configurations for a DoWhile loop in your flow.
loopController (dict) --
Contains controller node configurations for a DoWhile loop in your flow.
continueCondition (dict) -- [REQUIRED]
Specifies the condition that determines when the flow exits the DoWhile loop. The loop executes until this condition evaluates to true.
name (string) -- [REQUIRED]
A name for the condition that you can reference.
expression (string) --
Defines the condition. You must refer to at least one of the inputs in the condition. For more information, expand the Condition node section in Node types in prompt flows.
maxIterations (integer) --
Specifies the maximum number of times the DoWhile loop can iterate before the flow exits the loop.
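The loop controller fields above can be sketched as follows. All names and the expression syntax are placeholders; the pairing of the `LoopCondition` input category with the `continueCondition` expression reflects the descriptions in this reference:

```python
# Sketch of a LoopController node for a DoWhile loop.
loop_controller = {
    "name": "LoopGate",
    "type": "LoopController",
    "configuration": {
        "loopController": {
            # Expression syntax is an assumption; see the Condition node docs.
            "continueCondition": {"name": "KeepGoing", "expression": "remaining > 0"},
            "maxIterations": 10,  # hard cap on loop iterations
        }
    },
    "inputs": [
        {
            "name": "remaining",
            "type": "Number",
            "expression": "$.data",
            "category": "LoopCondition",  # evaluated by the continue condition
        }
    ],
}
```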
inputs (list) --
An array of objects, each of which contains information about an input into the node.
(dict) --
Contains configurations for an input in an Amazon Bedrock Flows node.
name (string) -- [REQUIRED]
Specifies a name for the input that you can reference.
type (string) -- [REQUIRED]
Specifies the data type of the input. If the input doesn't match this type at runtime, a validation error will be thrown.
expression (string) -- [REQUIRED]
An expression that formats the input for the node. For an explanation of how to create expressions, see Expressions in Prompt flows in Amazon Bedrock.
category (string) --
Specifies how input data flows between iterations in a DoWhile loop.
LoopCondition - Controls whether the loop continues by evaluating condition expressions against the input data. Use this category to define the condition that determines if the loop should continue.
ReturnValueToLoopStart - Defines data to pass back to the start of the loop's next iteration. Use this category for variables that you want to update for each loop iteration.
ExitLoop - Defines the value that's available once the loop ends. Use this category to expose loop results to nodes outside the loop.
outputs (list) --
A list of objects, each of which contains information about an output from the node.
(dict) --
Contains configurations for an output from a node.
name (string) -- [REQUIRED]
A name for the output that you can reference.
type (string) -- [REQUIRED]
The data type of the output. If the output doesn't match this type at runtime, a validation error will be thrown.
connections (list) --
An array of connection definitions in the flow.
(dict) --
Contains information about a connection between two nodes in the flow.
type (string) -- [REQUIRED]
Whether the source node that the connection begins from is a condition node (Conditional) or not (Data).
name (string) -- [REQUIRED]
A name for the connection that you can reference.
source (string) -- [REQUIRED]
The node that the connection starts at.
target (string) -- [REQUIRED]
The node that the connection ends at.
configuration (dict) --
The configuration of the connection.
data (dict) --
The configuration of a connection originating from a node that isn't a Condition node.
sourceOutput (string) -- [REQUIRED]
The name of the output in the source node that the connection begins from.
targetInput (string) -- [REQUIRED]
The name of the input in the target node that the connection ends at.
conditional (dict) --
The configuration of a connection originating from a Condition node.
condition (string) -- [REQUIRED]
The condition that triggers this connection. For more information about how to write conditions, see the Condition node type in the Node types topic in the Amazon Bedrock User Guide.
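The two connection types can be sketched side by side. Node and condition names are placeholders; the `Conditional` connection carries no input/output mapping, only the name of the condition that selects it:

```python
# Sketch of one Data and one Conditional connection (placeholder names).
connections = [
    {
        # Data connection: passes a named output into a named input.
        "type": "Data",
        "name": "InToPrompt",
        "source": "FlowInput",
        "target": "Summarize",
        "configuration": {
            "data": {"sourceOutput": "document", "targetInput": "input"}
        },
    },
    {
        # Conditional connection: taken when the named condition matches.
        "type": "Conditional",
        "name": "LongBranch",
        "source": "RouteBySize",
        "target": "Summarize",
        "configuration": {"conditional": {"condition": "IsLong"}},
    },
]
```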
Return type
dict
Response Syntax
{
'validations': [
{
'message': 'string',
'severity': 'Warning'|'Error',
'details': {
'cyclicConnection': {
'connection': 'string'
},
'duplicateConnections': {
'source': 'string',
'target': 'string'
},
'duplicateConditionExpression': {
'node': 'string',
'expression': 'string'
},
'unreachableNode': {
'node': 'string'
},
'unknownConnectionSource': {
'connection': 'string'
},
'unknownConnectionSourceOutput': {
'connection': 'string'
},
'unknownConnectionTarget': {
'connection': 'string'
},
'unknownConnectionTargetInput': {
'connection': 'string'
},
'unknownConnectionCondition': {
'connection': 'string'
},
'malformedConditionExpression': {
'node': 'string',
'condition': 'string',
'cause': 'string'
},
'malformedNodeInputExpression': {
'node': 'string',
'input': 'string',
'cause': 'string'
},
'mismatchedNodeInputType': {
'node': 'string',
'input': 'string',
'expectedType': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
'mismatchedNodeOutputType': {
'node': 'string',
'output': 'string',
'expectedType': 'String'|'Number'|'Boolean'|'Object'|'Array'
},
'incompatibleConnectionDataType': {
'connection': 'string'
},
'missingConnectionConfiguration': {
'connection': 'string'
},
'missingDefaultCondition': {
'node': 'string'
},
'missingEndingNodes': {},
'missingNodeConfiguration': {
'node': 'string'
},
'missingNodeInput': {
'node': 'string',
'input': 'string'
},
'missingNodeOutput': {
'node': 'string',
'output': 'string'
},
'missingStartingNodes': {},
'multipleNodeInputConnections': {
'node': 'string',
'input': 'string'
},
'unfulfilledNodeInput': {
'node': 'string',
'input': 'string'
},
'unsatisfiedConnectionConditions': {
'connection': 'string'
},
'unspecified': {},
'unknownNodeInput': {
'node': 'string',
'input': 'string'
},
'unknownNodeOutput': {
'node': 'string',
'output': 'string'
},
'missingLoopInputNode': {
'loopNode': 'string'
},
'missingLoopControllerNode': {
'loopNode': 'string'
},
'multipleLoopInputNodes': {
'loopNode': 'string'
},
'multipleLoopControllerNodes': {
'loopNode': 'string'
},
'loopIncompatibleNodeType': {
'node': 'string',
'incompatibleNodeType': 'Input'|'Condition'|'Iterator'|'Collector',
'incompatibleNodeName': 'string'
},
'invalidLoopBoundary': {
'connection': 'string',
'source': 'string',
'target': 'string'
}
},
'type': 'CyclicConnection'|'DuplicateConnections'|'DuplicateConditionExpression'|'UnreachableNode'|'UnknownConnectionSource'|'UnknownConnectionSourceOutput'|'UnknownConnectionTarget'|'UnknownConnectionTargetInput'|'UnknownConnectionCondition'|'MalformedConditionExpression'|'MalformedNodeInputExpression'|'MismatchedNodeInputType'|'MismatchedNodeOutputType'|'IncompatibleConnectionDataType'|'MissingConnectionConfiguration'|'MissingDefaultCondition'|'MissingEndingNodes'|'MissingNodeConfiguration'|'MissingNodeInput'|'MissingNodeOutput'|'MissingStartingNodes'|'MultipleNodeInputConnections'|'UnfulfilledNodeInput'|'UnsatisfiedConnectionConditions'|'Unspecified'|'UnknownNodeInput'|'UnknownNodeOutput'|'MissingLoopInputNode'|'MissingLoopControllerNode'|'MultipleLoopInputNodes'|'MultipleLoopControllerNodes'|'LoopIncompatibleNodeType'|'InvalidLoopBoundary'
},
]
}
Response Structure
(dict) --
validations (list) --
Contains an array of objects, each of which contains an error identified by validation.
(dict) --
Contains information about validation of the flow.
message (string) --
A message describing the validation error.
severity (string) --
The severity of the issue described in the message.
details (dict) --
Specific details about the validation issue encountered in the flow.
cyclicConnection (dict) --
Details about a cyclic connection in the flow.
connection (string) --
The name of the connection that causes the cycle in the flow.
duplicateConnections (dict) --
Details about duplicate connections between nodes.
source (string) --
The name of the source node where the duplicate connection starts.
target (string) --
The name of the target node where the duplicate connection ends.
duplicateConditionExpression (dict) --
Details about duplicate condition expressions in a node.
node (string) --
The name of the node containing the duplicate condition expressions.
expression (string) --
The duplicated condition expression.
unreachableNode (dict) --
Details about an unreachable node in the flow.
node (string) --
The name of the unreachable node.
unknownConnectionSource (dict) --
Details about an unknown source node for a connection.
connection (string) --
The name of the connection with the unknown source.
unknownConnectionSourceOutput (dict) --
Details about an unknown source output for a connection.
connection (string) --
The name of the connection with the unknown source output.
unknownConnectionTarget (dict) --
Details about an unknown target node for a connection.
connection (string) --
The name of the connection with the unknown target.
unknownConnectionTargetInput (dict) --
Details about an unknown target input for a connection.
connection (string) --
The name of the connection with the unknown target input.
unknownConnectionCondition (dict) --
Details about an unknown condition for a connection.
connection (string) --
The name of the connection with the unknown condition.
malformedConditionExpression (dict) --
Details about a malformed condition expression in a node.
node (string) --
The name of the node containing the malformed condition expression.
condition (string) --
The name of the malformed condition.
cause (string) --
The error message describing why the condition expression is malformed.
malformedNodeInputExpression (dict) --
Details about a malformed input expression in a node.
node (string) --
The name of the node containing the malformed input expression.
input (string) --
The name of the input with the malformed expression.
cause (string) --
The error message describing why the input expression is malformed.
mismatchedNodeInputType (dict) --
Details about mismatched input data types in a node.
node (string) --
The name of the node containing the input with the mismatched data type.
input (string) --
The name of the input with the mismatched data type.
expectedType (string) --
The expected data type for the node input.
mismatchedNodeOutputType (dict) --
Details about mismatched output data types in a node.
node (string) --
The name of the node containing the output with the mismatched data type.
output (string) --
The name of the output with the mismatched data type.
expectedType (string) --
The expected data type for the node output.
incompatibleConnectionDataType (dict) --
Details about incompatible data types in a connection.
connection (string) --
The name of the connection with incompatible data types.
missingConnectionConfiguration (dict) --
Details about missing configuration for a connection.
connection (string) --
The name of the connection missing configuration.
missingDefaultCondition (dict) --
Details about a missing default condition in a conditional node.
node (string) --
The name of the node missing the default condition.
missingEndingNodes (dict) --
Details about missing ending nodes in the flow.
missingNodeConfiguration (dict) --
Details about missing configuration for a node.
node (string) --
The name of the node missing a required configuration.
missingNodeInput (dict) --
Details about a missing required input in a node.
node (string) --
The name of the node missing the required input.
input (string) --
The name of the missing input.
missingNodeOutput (dict) --
Details about a missing required output in a node.
node (string) --
The name of the node missing the required output.
output (string) --
The name of the missing output.
missingStartingNodes (dict) --
Details about missing starting nodes in the flow.
multipleNodeInputConnections (dict) --
Details about multiple connections to a single node input.
node (string) --
The name of the node containing the input with multiple connections.
input (string) --
The name of the input with multiple connections to it.
unfulfilledNodeInput (dict) --
Details about an unfulfilled node input with no valid connections.
node (string) --
The name of the node containing the unfulfilled input.
input (string) --
The name of the unfulfilled input. An input is unfulfilled if there are no data connections to it.
unsatisfiedConnectionConditions (dict) --
Details about unsatisfied conditions for a connection.
connection (string) --
The name of the connection with unsatisfied conditions.
unspecified (dict) --
Details about an unspecified validation.
unknownNodeInput (dict) --
Details about an unknown input for a node.
node (string) --
The name of the node with the unknown input.
input (string) --
The name of the unknown input.
unknownNodeOutput (dict) --
Details about an unknown output for a node.
node (string) --
The name of the node with the unknown output.
output (string) --
The name of the unknown output.
missingLoopInputNode (dict) --
Details about a flow that's missing a required LoopInput node in a DoWhile loop.
loopNode (string) --
The DoWhile loop in a flow that's missing a required LoopInput node.
missingLoopControllerNode (dict) --
Details about a flow that's missing a required LoopController node in a DoWhile loop.
loopNode (string) --
The DoWhile loop in a flow that's missing a required LoopController node.
multipleLoopInputNodes (dict) --
Details about a flow that contains multiple LoopInput nodes in a DoWhile loop.
loopNode (string) --
The DoWhile loop in a flow that contains multiple LoopInput nodes.
multipleLoopControllerNodes (dict) --
Details about a flow that contains multiple LoopController nodes in a DoWhile loop.
loopNode (string) --
The DoWhile loop in a flow that contains multiple LoopController nodes.
loopIncompatibleNodeType (dict) --
Details about a flow that includes incompatible node types in a DoWhile loop.
node (string) --
The Loop container node that contains an incompatible node.
incompatibleNodeType (string) --
The node type of the incompatible node in the DoWhile loop. Some node types, like a condition node, aren't allowed in a DoWhile loop.
incompatibleNodeName (string) --
The node that's incompatible in the DoWhile loop.
invalidLoopBoundary (dict) --
Details about a flow that includes connections that violate loop boundary rules.
connection (string) --
The name of the connection that violates loop boundary rules.
source (string) --
The source node of the connection that violates DoWhile loop boundary rules.
target (string) --
The target node of the connection that violates DoWhile loop boundary rules.
type (string) --
The type of validation issue encountered in the flow.
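Since the response separates each finding into a `severity`, a `type`, and a type-specific `details` entry, callers typically split findings by severity before deciding whether to proceed. The snippet below operates on a hand-built sample response shaped like the structure above (the messages and node names are invented for illustration):

```python
# Hand-built sample response matching the documented validations structure.
sample_response = {
    "validations": [
        {
            "message": "Node 'Orphan' is unreachable.",       # invented example
            "severity": "Warning",
            "type": "UnreachableNode",
            "details": {"unreachableNode": {"node": "Orphan"}},
        },
        {
            "message": "Connection 'Loop1' creates a cycle.",  # invented example
            "severity": "Error",
            "type": "CyclicConnection",
            "details": {"cyclicConnection": {"connection": "Loop1"}},
        },
    ]
}

# Split findings by severity; Errors typically block deployment of the flow.
errors = [v for v in sample_response["validations"] if v["severity"] == "Error"]
warnings = [v for v in sample_response["validations"] if v["severity"] == "Warning"]
```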