2024/09/04 - Agents for Amazon Bedrock - 4 updated API methods
Changes: Add support for user metadata inside PromptVariant.
{'variants': {'metadata': [{'key': 'string', 'value': 'string'}]}}
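The new metadata field is a list of key-value objects attached to an individual prompt variant. A minimal sketch of the shape (the key and value strings here are illustrative, not defined by the API):

# Hypothetical tag values; only the structure comes from the API change above.
variant_metadata = [
    {'key': 'owner', 'value': 'docs-team'},
    {'key': 'experiment', 'value': 'a-12'},
]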
Creates a prompt in your prompt library that you can add to a flow. For more information, see Prompt management in Amazon Bedrock, Create a prompt using Prompt management and Prompt flows in Amazon Bedrock in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.create_prompt(
    clientToken='string',
    customerEncryptionKeyArn='string',
    defaultVariant='string',
    description='string',
    name='string',
    tags={
        'string': 'string'
    },
    variants=[
        {
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topK': 123,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ]
)
clientToken (string) --
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.
defaultVariant (string) --
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
description (string) --
A description for the prompt.
name (string) -- [REQUIRED]
A name for the prompt.
tags (dict) --
Any tags that you want to attach to the prompt. For more information, see Tagging resources in Amazon Bedrock.
(string) --
(string) --
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topK (integer) --
The number of most-likely candidates that the model considers for the next token during generation.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) -- [REQUIRED]
The key of a metadata tag for a prompt variant.
value (string) -- [REQUIRED]
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model with which to run inference on the prompt.
name (string) -- [REQUIRED]
The name of the prompt variant.
templateConfiguration (dict) --
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) -- [REQUIRED]
The message for the prompt.
templateType (string) -- [REQUIRED]
The type of prompt template to use.
Return type: dict
Response Syntax
{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'defaultVariant': 'string',
    'description': 'string',
    'id': 'string',
    'name': 'string',
    'updatedAt': datetime(2015, 1, 1),
    'variants': [
        {
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topK': 123,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ],
    'version': 'string'
}
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the prompt.
createdAt (datetime) --
The time at which the prompt was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that you encrypted the prompt with.
defaultVariant (string) --
The name of the default variant for your prompt.
description (string) --
The description of the prompt.
id (string) --
The unique identifier of the prompt.
name (string) --
The name of the prompt.
updatedAt (datetime) --
The time at which the prompt was last updated.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topK (integer) --
The number of most-likely candidates that the model considers for the next token during generation.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model with which to run inference on the prompt.
name (string) --
The name of the prompt variant.
templateConfiguration (dict) --
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template to use.
version (string) --
The version of the prompt. When you create a prompt, the version created is the DRAFT version.
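As a usage illustration, here is a minimal sketch of calling create_prompt with the new per-variant metadata. It assumes the bedrock-agent boto3 client; the prompt name, model ID, template text, and metadata values are invented for the example:

import boto3

# Build-time Agents for Amazon Bedrock operations live on the 'bedrock-agent' client.
client = boto3.client('bedrock-agent')

response = client.create_prompt(
    name='summarizer',                  # hypothetical prompt name
    defaultVariant='v1',                # must match the variant name below
    variants=[
        {
            'name': 'v1',
            'templateType': 'TEXT',
            'templateConfiguration': {
                'text': {
                    'text': 'Summarize the following text: {{input}}',
                    'inputVariables': [{'name': 'input'}],
                },
            },
            # Example model ID; substitute any model available in your Region.
            'modelId': 'anthropic.claude-3-haiku-20240307-v1:0',
            'inferenceConfiguration': {
                'text': {'maxTokens': 512, 'temperature': 0.2},
            },
            # New in this release: user-defined metadata on the variant.
            'metadata': [
                {'key': 'owner', 'value': 'docs-team'},
            ],
        },
    ],
)
print(response['id'], response['version'])  # version is 'DRAFT' on creation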
{'variants': {'metadata': [{'key': 'string', 'value': 'string'}]}}
Creates a static snapshot of your prompt that can be deployed to production. For more information, see Deploy prompts using Prompt management by creating versions in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.create_prompt_version(
    clientToken='string',
    description='string',
    promptIdentifier='string',
    tags={
        'string': 'string'
    }
)
clientToken (string) --
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
description (string) --
A description for the version of the prompt.
promptIdentifier (string) -- [REQUIRED]
The unique identifier of the prompt that you want to create a version of.
tags (dict) --
Any tags that you want to attach to the version of the prompt. For more information, see Tagging resources in Amazon Bedrock.
(string) --
(string) --
Return type: dict
Response Syntax
{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'defaultVariant': 'string',
    'description': 'string',
    'id': 'string',
    'name': 'string',
    'updatedAt': datetime(2015, 1, 1),
    'variants': [
        {
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topK': 123,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ],
    'version': 'string'
}
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the version of the prompt.
createdAt (datetime) --
The time at which the prompt was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key used to encrypt the version of the prompt.
defaultVariant (string) --
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
description (string) --
A description for the version.
id (string) --
The unique identifier of the prompt.
name (string) --
The name of the prompt.
updatedAt (datetime) --
The time at which the prompt was last updated.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topK (integer) --
The number of most-likely candidates that the model considers for the next token during generation.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model with which to run inference on the prompt.
name (string) --
The name of the prompt variant.
templateConfiguration (dict) --
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template to use.
version (string) --
The version of the prompt that was created. Versions are numbered incrementally, starting from 1.
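A short sketch of snapshotting an existing prompt into a numbered version; the prompt identifier is a placeholder, not a real ID:

import boto3

client = boto3.client('bedrock-agent')

version_response = client.create_prompt_version(
    promptIdentifier='PROMPT12345',     # placeholder ID (or ARN) of an existing prompt
    description='Snapshot before production rollout',
)
# Versions are numbered incrementally, so the first snapshot returns '1'.
print(version_response['version'])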
{'variants': {'metadata': [{'key': 'string', 'value': 'string'}]}}
Retrieves information about the working draft (the DRAFT version) of a prompt, or about a specific version of it, depending on whether you include the promptVersion field. For more information, see View information about prompts using Prompt management and View information about a version of your prompt in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.get_prompt(
    promptIdentifier='string',
    promptVersion='string'
)
promptIdentifier (string) -- [REQUIRED]
The unique identifier of the prompt.
promptVersion (string) --
The version of the prompt about which you want to retrieve information. Omit this field to return information about the working draft of the prompt.
Return type: dict
Response Syntax
{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'defaultVariant': 'string',
    'description': 'string',
    'id': 'string',
    'name': 'string',
    'updatedAt': datetime(2015, 1, 1),
    'variants': [
        {
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topK': 123,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ],
    'version': 'string'
}
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the prompt or the prompt version (if you specified a version in the request).
createdAt (datetime) --
The time at which the prompt was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key that the prompt is encrypted with.
defaultVariant (string) --
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
description (string) --
The description of the prompt.
id (string) --
The unique identifier of the prompt.
name (string) --
The name of the prompt.
updatedAt (datetime) --
The time at which the prompt was last updated.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topK (integer) --
The number of most-likely candidates that the model considers for the next token during generation.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model with which to run inference on the prompt.
name (string) --
The name of the prompt variant.
templateConfiguration (dict) --
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template to use.
version (string) --
The version of the prompt.
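To illustrate the promptVersion switch described above, a brief sketch (again with a placeholder identifier) that fetches the working draft and then a numbered version:

import boto3

client = boto3.client('bedrock-agent')

# Omit promptVersion to get the working draft (DRAFT version) ...
draft = client.get_prompt(promptIdentifier='PROMPT12345')

# ... or pass a version number to get that snapshot instead.
v1 = client.get_prompt(promptIdentifier='PROMPT12345', promptVersion='1')

for variant in v1['variants']:
    # Each metadata entry is a {'key': ..., 'value': ...} dict.
    print(variant['name'], variant.get('metadata', []))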
{'variants': {'metadata': [{'key': 'string', 'value': 'string'}]}}
Modifies a prompt in your prompt library. Include both fields that you want to keep and fields that you want to replace. For more information, see Prompt management in Amazon Bedrock and Edit prompts in your prompt library in the Amazon Bedrock User Guide.
See also: AWS API Documentation
Request Syntax
client.update_prompt(
    customerEncryptionKeyArn='string',
    defaultVariant='string',
    description='string',
    name='string',
    promptIdentifier='string',
    variants=[
        {
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topK': 123,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ]
)
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key to encrypt the prompt.
defaultVariant (string) --
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
description (string) --
A description for the prompt.
name (string) -- [REQUIRED]
A name for the prompt.
promptIdentifier (string) -- [REQUIRED]
The unique identifier of the prompt.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topK (integer) --
The number of most-likely candidates that the model considers for the next token during generation.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) -- [REQUIRED]
The key of a metadata tag for a prompt variant.
value (string) -- [REQUIRED]
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model with which to run inference on the prompt.
name (string) -- [REQUIRED]
The name of the prompt variant.
templateConfiguration (dict) --
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) -- [REQUIRED]
The message for the prompt.
templateType (string) -- [REQUIRED]
The type of prompt template to use.
Return type: dict
Response Syntax
{
    'arn': 'string',
    'createdAt': datetime(2015, 1, 1),
    'customerEncryptionKeyArn': 'string',
    'defaultVariant': 'string',
    'description': 'string',
    'id': 'string',
    'name': 'string',
    'updatedAt': datetime(2015, 1, 1),
    'variants': [
        {
            'inferenceConfiguration': {
                'text': {
                    'maxTokens': 123,
                    'stopSequences': [
                        'string',
                    ],
                    'temperature': ...,
                    'topK': 123,
                    'topP': ...
                }
            },
            'metadata': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ],
            'modelId': 'string',
            'name': 'string',
            'templateConfiguration': {
                'text': {
                    'inputVariables': [
                        {
                            'name': 'string'
                        },
                    ],
                    'text': 'string'
                }
            },
            'templateType': 'TEXT'
        },
    ],
    'version': 'string'
}
Response Structure
(dict) --
arn (string) --
The Amazon Resource Name (ARN) of the prompt.
createdAt (datetime) --
The time at which the prompt was created.
customerEncryptionKeyArn (string) --
The Amazon Resource Name (ARN) of the KMS key used to encrypt the prompt.
defaultVariant (string) --
The name of the default variant for the prompt. This value must match the name field in the relevant PromptVariant object.
description (string) --
The description of the prompt.
id (string) --
The unique identifier of the prompt.
name (string) --
The name of the prompt.
updatedAt (datetime) --
The time at which the prompt was last updated.
variants (list) --
A list of objects, each containing details about a variant of the prompt.
(dict) --
Contains details about a variant of the prompt.
inferenceConfiguration (dict) --
Contains inference configurations for the prompt variant.
text (dict) --
Contains inference configurations for a text prompt.
maxTokens (integer) --
The maximum number of tokens to return in the response.
stopSequences (list) --
A list of strings that define sequences after which the model will stop generating.
(string) --
temperature (float) --
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topK (integer) --
The number of most-likely candidates that the model considers for the next token during generation.
topP (float) --
The percentage of most-likely candidates that the model considers for the next token.
metadata (list) --
An array of objects, each containing a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
(dict) --
Contains a key-value pair that defines a metadata tag and value to attach to a prompt variant. For more information, see Create a prompt using Prompt management.
key (string) --
The key of a metadata tag for a prompt variant.
value (string) --
The value of a metadata tag for a prompt variant.
modelId (string) --
The unique identifier of the model with which to run inference on the prompt.
name (string) --
The name of the prompt variant.
templateConfiguration (dict) --
Contains configurations for the prompt template.
text (dict) --
Contains configurations for the text in a message for a prompt.
inputVariables (list) --
An array of the variables in the prompt template.
(dict) --
Contains information about a variable in the prompt.
name (string) --
The name of the variable.
text (string) --
The message for the prompt.
templateType (string) --
The type of prompt template to use.
version (string) --
The version of the prompt. When you update a prompt, the version updated is the DRAFT version.
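Because update_prompt replaces the stored configuration rather than merging into it, a common pattern is read-modify-write. A hedged sketch under that assumption, with a placeholder identifier:

import boto3

client = boto3.client('bedrock-agent')

# Read the current draft so unchanged fields can be sent back as-is.
current = client.get_prompt(promptIdentifier='PROMPT12345')

# Append a new metadata tag to the first variant.
variants = current['variants']
variants[0].setdefault('metadata', []).append(
    {'key': 'reviewed', 'value': 'true'}
)

client.update_prompt(
    promptIdentifier=current['id'],
    name=current['name'],               # required even when unchanged
    defaultVariant=current['defaultVariant'],
    variants=variants,
)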