Amazon Bedrock

2024/04/23 - Amazon Bedrock - 10 new API methods

Changes  This release introduces Model Evaluation and Guardrails for Amazon Bedrock.

GetEvaluationJob (new) Link ¶

Retrieves the properties associated with a model evaluation job, including the status of the job. For more information, see Model evaluations.

See also: AWS API Documentation

Request Syntax

client.get_evaluation_job(
    jobIdentifier='string'
)
type jobIdentifier:

string

param jobIdentifier:

[REQUIRED]

The Amazon Resource Name (ARN) of the model evaluation job.

rtype:

dict

returns:

Response Syntax

{
    'jobName': 'string',
    'status': 'InProgress'|'Completed'|'Failed'|'Stopping'|'Stopped',
    'jobArn': 'string',
    'jobDescription': 'string',
    'roleArn': 'string',
    'customerEncryptionKeyId': 'string',
    'jobType': 'Human'|'Automated',
    'evaluationConfig': {
        'automated': {
            'datasetMetricConfigs': [
                {
                    'taskType': 'Summarization'|'Classification'|'QuestionAndAnswer'|'Generation'|'Custom',
                    'dataset': {
                        'name': 'string',
                        'datasetLocation': {
                            's3Uri': 'string'
                        }
                    },
                    'metricNames': [
                        'string',
                    ]
                },
            ]
        },
        'human': {
            'humanWorkflowConfig': {
                'flowDefinitionArn': 'string',
                'instructions': 'string'
            },
            'customMetrics': [
                {
                    'name': 'string',
                    'description': 'string',
                    'ratingMethod': 'string'
                },
            ],
            'datasetMetricConfigs': [
                {
                    'taskType': 'Summarization'|'Classification'|'QuestionAndAnswer'|'Generation'|'Custom',
                    'dataset': {
                        'name': 'string',
                        'datasetLocation': {
                            's3Uri': 'string'
                        }
                    },
                    'metricNames': [
                        'string',
                    ]
                },
            ]
        }
    },
    'inferenceConfig': {
        'models': [
            {
                'bedrockModel': {
                    'modelIdentifier': 'string',
                    'inferenceParams': 'string'
                }
            },
        ]
    },
    'outputDataConfig': {
        's3Uri': 'string'
    },
    'creationTime': datetime(2015, 1, 1),
    'lastModifiedTime': datetime(2015, 1, 1),
    'failureMessages': [
        'string',
    ]
}

Response Structure

  • (dict) --

    • jobName (string) --

      The name of the model evaluation job.

    • status (string) --

      The status of the model evaluation job.

    • jobArn (string) --

      The Amazon Resource Name (ARN) of the model evaluation job.

    • jobDescription (string) --

      The description of the model evaluation job.

    • roleArn (string) --

      The Amazon Resource Name (ARN) of the IAM service role used in the model evaluation job.

    • customerEncryptionKeyId (string) --

      The Amazon Resource Name (ARN) of the customer managed key specified when the model evaluation job was created.

    • jobType (string) --

      The type of model evaluation job.

    • evaluationConfig (dict) --

      Contains details about the type of model evaluation job, the metrics used, the task type selected, the datasets used, and any custom metrics you defined.

      • automated (dict) --

        Used to specify an automated model evaluation job. See AutomatedEvaluationConfig to view the required parameters.

        • datasetMetricConfigs (list) --

          Specifies the required elements for an automatic model evaluation job.

          • (dict) --

            Defines the built-in prompt datasets, built-in metric names and custom metric names, and the task type.

            • taskType (string) --

              The task type you want the model to carry out.

            • dataset (dict) --

              Specifies the prompt dataset.

              • name (string) --

                Used to specify supported built-in prompt datasets. Valid values are Builtin.Bold, Builtin.BoolQ, Builtin.NaturalQuestions, Builtin.Gigaword, Builtin.RealToxicityPrompts, Builtin.TriviaQa, Builtin.T-Rex, Builtin.WomensEcommerceClothingReviews and Builtin.Wikitext2.

              • datasetLocation (dict) --

                For custom prompt datasets, you must specify the location in Amazon S3 where the prompt dataset is saved.

                • s3Uri (string) --

                  The S3 URI of the S3 bucket specified in the job.

            • metricNames (list) --

              The names of the metrics used. For automated model evaluation jobs valid values are "Builtin.Accuracy", "Builtin.Robustness", and "Builtin.Toxicity". In human-based model evaluation jobs the array of strings must match the name parameter specified in HumanEvaluationCustomMetric.

              • (string) --

      • human (dict) --

        Used to specify a model evaluation job that uses human workers. See HumanEvaluationConfig to view the required parameters.

        • humanWorkflowConfig (dict) --

          The parameters of the human workflow.

          • flowDefinitionArn (string) --

            The Amazon Resource Name (ARN) of the flow definition.

          • instructions (string) --

            Instructions for the flow definition.

        • customMetrics (list) --

          A list of HumanEvaluationCustomMetric objects. Each contains the name of a metric, how the metric is to be evaluated, and an optional description.

          • (dict) --

            In a model evaluation job that uses human workers, you must define the name of the metric and how you want that metric rated ( ratingMethod); you can also provide an optional description of the metric.

            • name (string) --

              The name of the metric. Your human evaluators will see this name in the evaluation UI.

            • description (string) --

              An optional description of the metric. Use this parameter to provide more details about the metric.

            • ratingMethod (string) --

              Choose how you want your human workers to evaluate your model. Valid values for rating methods are ThumbsUpDown, IndividualLikertScale, ComparisonLikertScale, ComparisonChoice, and ComparisonRank.

        • datasetMetricConfigs (list) --

          Use to specify the metrics, task, and prompt dataset to be used in your model evaluation job.

          • (dict) --

            Defines the built-in prompt datasets, built-in metric names and custom metric names, and the task type.

            • taskType (string) --

              The task type you want the model to carry out.

            • dataset (dict) --

              Specifies the prompt dataset.

              • name (string) --

                Used to specify supported built-in prompt datasets. Valid values are Builtin.Bold, Builtin.BoolQ, Builtin.NaturalQuestions, Builtin.Gigaword, Builtin.RealToxicityPrompts, Builtin.TriviaQa, Builtin.T-Rex, Builtin.WomensEcommerceClothingReviews and Builtin.Wikitext2.

              • datasetLocation (dict) --

                For custom prompt datasets, you must specify the location in Amazon S3 where the prompt dataset is saved.

                • s3Uri (string) --

                  The S3 URI of the S3 bucket specified in the job.

            • metricNames (list) --

              The names of the metrics used. For automated model evaluation jobs valid values are "Builtin.Accuracy", "Builtin.Robustness", and "Builtin.Toxicity". In human-based model evaluation jobs the array of strings must match the name parameter specified in HumanEvaluationCustomMetric.

              • (string) --

    • inferenceConfig (dict) --

      Details about the models you specified in your model evaluation job.

      • models (list) --

        Used to specify the models.

        • (dict) --

          Defines the models used in the model evaluation job.

          • bedrockModel (dict) --

            Defines the Amazon Bedrock model and inference parameters you want used.

            • modelIdentifier (string) --

              The ARN of the Amazon Bedrock model specified.

            • inferenceParams (string) --

              Each Amazon Bedrock model supports different inference parameters that change how the model behaves during inference.

    • outputDataConfig (dict) --

      Amazon S3 location for where output data is saved.

      • s3Uri (string) --

        The Amazon S3 URI where the results of the model evaluation job are saved.

    • creationTime (datetime) --

      When the model evaluation job was created.

    • lastModifiedTime (datetime) --

      When the model evaluation job was last modified.

    • failureMessages (list) --

      An array of strings that specify why the model evaluation job failed.

      • (string) --
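
The status field passes through InProgress before settling on Completed, Failed, or Stopped, so callers typically poll this operation. A minimal polling sketch using boto3; the job ARN and region in the demo are placeholders:

```python
import time

# Statuses after which the job will not change again.
TERMINAL_STATUSES = {"Completed", "Failed", "Stopped"}

def is_terminal(status):
    """Return True once a model evaluation job has finished, successfully or not."""
    return status in TERMINAL_STATUSES

def wait_for_job(client, job_arn, poll_seconds=30):
    """Poll get_evaluation_job until the job reaches a terminal status."""
    while True:
        job = client.get_evaluation_job(jobIdentifier=job_arn)
        if is_terminal(job["status"]):
            return job
        time.sleep(poll_seconds)

def demo():
    """Requires AWS credentials; the ARN below is hypothetical."""
    import boto3
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    job = wait_for_job(
        bedrock, "arn:aws:bedrock:us-east-1:111122223333:evaluation-job/example"
    )
    # failureMessages is only meaningful when the job has Failed.
    print(job["status"], job.get("failureMessages", []))
```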

ListEvaluationJobs (new) Link ¶

Lists model evaluation jobs.

See also: AWS API Documentation

Request Syntax

client.list_evaluation_jobs(
    creationTimeAfter=datetime(2015, 1, 1),
    creationTimeBefore=datetime(2015, 1, 1),
    statusEquals='InProgress'|'Completed'|'Failed'|'Stopping'|'Stopped',
    nameContains='string',
    maxResults=123,
    nextToken='string',
    sortBy='CreationTime',
    sortOrder='Ascending'|'Descending'
)
type creationTimeAfter:

datetime

param creationTimeAfter:

A filter that includes model evaluation jobs created after the time specified.

type creationTimeBefore:

datetime

param creationTimeBefore:

A filter that includes model evaluation jobs created prior to the time specified.

type statusEquals:

string

param statusEquals:

Only return jobs where the status condition is met.

type nameContains:

string

param nameContains:

Query parameter string for model evaluation job names.

type maxResults:

integer

param maxResults:

The maximum number of results to return.

type nextToken:

string

param nextToken:

Continuation token from the previous response, for Amazon Bedrock to list the next set of results.

type sortBy:

string

param sortBy:

Allows you to sort model evaluation jobs by when they were created.

type sortOrder:

string

param sortOrder:

How you want the order of jobs sorted.

rtype:

dict

returns:

Response Syntax

{
    'nextToken': 'string',
    'jobSummaries': [
        {
            'jobArn': 'string',
            'jobName': 'string',
            'status': 'InProgress'|'Completed'|'Failed'|'Stopping'|'Stopped',
            'creationTime': datetime(2015, 1, 1),
            'jobType': 'Human'|'Automated',
            'evaluationTaskTypes': [
                'Summarization'|'Classification'|'QuestionAndAnswer'|'Generation'|'Custom',
            ],
            'modelIdentifiers': [
                'string',
            ]
        },
    ]
}

Response Structure

  • (dict) --

    • nextToken (string) --

      Continuation token from the previous response, for Amazon Bedrock to list the next set of results.

    • jobSummaries (list) --

      A summary of the model evaluation jobs.

      • (dict) --

        A summary of the model evaluation job.

        • jobArn (string) --

          The Amazon Resource Name (ARN) of the model evaluation job.

        • jobName (string) --

          The name of the model evaluation job.

        • status (string) --

          The current status of the model evaluation job.

        • creationTime (datetime) --

          When the model evaluation job was created.

        • jobType (string) --

          The type of the model evaluation job, either human or automated.

        • evaluationTaskTypes (list) --

          The task types used in the model evaluation job.

          • (string) --

        • modelIdentifiers (list) --

          The Amazon Resource Names (ARNs) of the model(s) used in the model evaluation job.

          • (string) --
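
Because results are paginated through nextToken, listing every job means looping until the token is absent from the response. A small sketch; the statusEquals filter in the helper is illustrative:

```python
def iter_evaluation_jobs(client, **filters):
    """Yield job summaries across all pages, following nextToken until exhausted.

    filters may include statusEquals, nameContains, creationTimeAfter, and the
    other list_evaluation_jobs parameters.
    """
    kwargs = dict(filters)
    while True:
        page = client.list_evaluation_jobs(**kwargs)
        yield from page.get("jobSummaries", [])
        token = page.get("nextToken")
        if not token:
            return
        kwargs["nextToken"] = token

def completed_job_names(client):
    """Collect the names of all Completed jobs (requires AWS credentials)."""
    return [j["jobName"] for j in iter_evaluation_jobs(client, statusEquals="Completed")]
```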

CreateEvaluationJob (new) Link ¶

API operation for creating and managing Amazon Bedrock automated model evaluation jobs and model evaluation jobs that use human workers. To learn more about the requirements for creating a model evaluation job, see Model evaluations.

See also: AWS API Documentation

Request Syntax

client.create_evaluation_job(
    jobName='string',
    jobDescription='string',
    clientRequestToken='string',
    roleArn='string',
    customerEncryptionKeyId='string',
    jobTags=[
        {
            'key': 'string',
            'value': 'string'
        },
    ],
    evaluationConfig={
        'automated': {
            'datasetMetricConfigs': [
                {
                    'taskType': 'Summarization'|'Classification'|'QuestionAndAnswer'|'Generation'|'Custom',
                    'dataset': {
                        'name': 'string',
                        'datasetLocation': {
                            's3Uri': 'string'
                        }
                    },
                    'metricNames': [
                        'string',
                    ]
                },
            ]
        },
        'human': {
            'humanWorkflowConfig': {
                'flowDefinitionArn': 'string',
                'instructions': 'string'
            },
            'customMetrics': [
                {
                    'name': 'string',
                    'description': 'string',
                    'ratingMethod': 'string'
                },
            ],
            'datasetMetricConfigs': [
                {
                    'taskType': 'Summarization'|'Classification'|'QuestionAndAnswer'|'Generation'|'Custom',
                    'dataset': {
                        'name': 'string',
                        'datasetLocation': {
                            's3Uri': 'string'
                        }
                    },
                    'metricNames': [
                        'string',
                    ]
                },
            ]
        }
    },
    inferenceConfig={
        'models': [
            {
                'bedrockModel': {
                    'modelIdentifier': 'string',
                    'inferenceParams': 'string'
                }
            },
        ]
    },
    outputDataConfig={
        's3Uri': 'string'
    }
)
type jobName:

string

param jobName:

[REQUIRED]

The name of the model evaluation job. Model evaluation job names must be unique within your AWS account and AWS Region.

type jobDescription:

string

param jobDescription:

A description of the model evaluation job.

type clientRequestToken:

string

param clientRequestToken:

A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.

This field is autopopulated if not provided.

type roleArn:

string

param roleArn:

[REQUIRED]

The Amazon Resource Name (ARN) of an IAM service role that Amazon Bedrock can assume to perform tasks on your behalf. The service role must have Amazon Bedrock as the service principal, and provide access to any Amazon S3 buckets specified in the EvaluationConfig object. To pass this role to Amazon Bedrock, the caller of this API must have the iam:PassRole permission. To learn more about the required permissions, see Required permissions.

type customerEncryptionKeyId:

string

param customerEncryptionKeyId:

Specify your customer managed key ARN that will be used to encrypt your model evaluation job.

type jobTags:

list

param jobTags:

Tags to attach to the model evaluation job.

  • (dict) --

    Definition of the key/value pair for a tag.

    • key (string) -- [REQUIRED]

      Key for the tag.

    • value (string) -- [REQUIRED]

      Value for the tag.

type evaluationConfig:

dict

param evaluationConfig:

[REQUIRED]

Specifies whether the model evaluation job is automated or uses human workers.

  • automated (dict) --

    Used to specify an automated model evaluation job. See AutomatedEvaluationConfig to view the required parameters.

    • datasetMetricConfigs (list) -- [REQUIRED]

      Specifies the required elements for an automatic model evaluation job.

      • (dict) --

        Defines the built-in prompt datasets, built-in metric names and custom metric names, and the task type.

        • taskType (string) -- [REQUIRED]

          The task type you want the model to carry out.

        • dataset (dict) -- [REQUIRED]

          Specifies the prompt dataset.

          • name (string) -- [REQUIRED]

            Used to specify supported built-in prompt datasets. Valid values are Builtin.Bold, Builtin.BoolQ, Builtin.NaturalQuestions, Builtin.Gigaword, Builtin.RealToxicityPrompts, Builtin.TriviaQa, Builtin.T-Rex, Builtin.WomensEcommerceClothingReviews and Builtin.Wikitext2.

          • datasetLocation (dict) --

            For custom prompt datasets, you must specify the location in Amazon S3 where the prompt dataset is saved.

            • s3Uri (string) --

              The S3 URI of the S3 bucket specified in the job.

        • metricNames (list) -- [REQUIRED]

          The names of the metrics used. For automated model evaluation jobs valid values are "Builtin.Accuracy", "Builtin.Robustness", and "Builtin.Toxicity". In human-based model evaluation jobs the array of strings must match the name parameter specified in HumanEvaluationCustomMetric.

          • (string) --

  • human (dict) --

    Used to specify a model evaluation job that uses human workers. See HumanEvaluationConfig to view the required parameters.

    • humanWorkflowConfig (dict) --

      The parameters of the human workflow.

      • flowDefinitionArn (string) -- [REQUIRED]

        The Amazon Resource Name (ARN) of the flow definition.

      • instructions (string) --

        Instructions for the flow definition.

    • customMetrics (list) --

      A list of HumanEvaluationCustomMetric objects. Each contains the name of a metric, how the metric is to be evaluated, and an optional description.

      • (dict) --

        In a model evaluation job that uses human workers, you must define the name of the metric and how you want that metric rated ( ratingMethod); you can also provide an optional description of the metric.

        • name (string) -- [REQUIRED]

          The name of the metric. Your human evaluators will see this name in the evaluation UI.

        • description (string) --

          An optional description of the metric. Use this parameter to provide more details about the metric.

        • ratingMethod (string) -- [REQUIRED]

          Choose how you want your human workers to evaluate your model. Valid values for rating methods are ThumbsUpDown, IndividualLikertScale, ComparisonLikertScale, ComparisonChoice, and ComparisonRank.

    • datasetMetricConfigs (list) -- [REQUIRED]

      Use to specify the metrics, task, and prompt dataset to be used in your model evaluation job.

      • (dict) --

        Defines the built-in prompt datasets, built-in metric names and custom metric names, and the task type.

        • taskType (string) -- [REQUIRED]

          The task type you want the model to carry out.

        • dataset (dict) -- [REQUIRED]

          Specifies the prompt dataset.

          • name (string) -- [REQUIRED]

            Used to specify supported built-in prompt datasets. Valid values are Builtin.Bold, Builtin.BoolQ, Builtin.NaturalQuestions, Builtin.Gigaword, Builtin.RealToxicityPrompts, Builtin.TriviaQa, Builtin.T-Rex, Builtin.WomensEcommerceClothingReviews and Builtin.Wikitext2.

          • datasetLocation (dict) --

            For custom prompt datasets, you must specify the location in Amazon S3 where the prompt dataset is saved.

            • s3Uri (string) --

              The S3 URI of the S3 bucket specified in the job.

        • metricNames (list) -- [REQUIRED]

          The names of the metrics used. For automated model evaluation jobs valid values are "Builtin.Accuracy", "Builtin.Robustness", and "Builtin.Toxicity". In human-based model evaluation jobs the array of strings must match the name parameter specified in HumanEvaluationCustomMetric.

          • (string) --

type inferenceConfig:

dict

param inferenceConfig:

[REQUIRED]

Specify the models you want to use in your model evaluation job. Automated model evaluation jobs support a single model; model evaluation jobs that use human workers support two models.

  • models (list) --

    Used to specify the models.

    • (dict) --

      Defines the models used in the model evaluation job.

      • bedrockModel (dict) --

        Defines the Amazon Bedrock model and inference parameters you want used.

        • modelIdentifier (string) -- [REQUIRED]

          The ARN of the Amazon Bedrock model specified.

        • inferenceParams (string) -- [REQUIRED]

          Each Amazon Bedrock model supports different inference parameters that change how the model behaves during inference.

type outputDataConfig:

dict

param outputDataConfig:

[REQUIRED]

An object that defines where the results of the model evaluation job will be saved in Amazon S3.

  • s3Uri (string) -- [REQUIRED]

    The Amazon S3 URI where the results of the model evaluation job are saved.

rtype:

dict

returns:

Response Syntax

{
    'jobArn': 'string'
}

Response Structure

  • (dict) --

    • jobArn (string) --

      The ARN of the model evaluation job.
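
Putting the required parameters together: an automated job needs an evaluationConfig, an inferenceConfig, and an outputDataConfig. A sketch of a minimal automated job; the role ARN, bucket, and model identifier are placeholders, and the inferenceParams format is model-specific:

```python
def automated_eval_config(task_type, dataset_name, metric_names, s3_uri=None):
    """Build the evaluationConfig for an automated job.

    Built-in datasets (e.g. "Builtin.BoolQ") need no datasetLocation; for a
    Custom task, pass the S3 URI of your prompt dataset.
    """
    dataset = {"name": dataset_name}
    if s3_uri:
        dataset["datasetLocation"] = {"s3Uri": s3_uri}
    return {
        "automated": {
            "datasetMetricConfigs": [
                {"taskType": task_type, "dataset": dataset, "metricNames": metric_names}
            ]
        }
    }

def start_automated_job(region="us-east-1"):
    """Start a job; requires AWS credentials, and the ARNs/URIs are hypothetical."""
    import boto3
    bedrock = boto3.client("bedrock", region_name=region)
    resp = bedrock.create_evaluation_job(
        jobName="boolq-accuracy-demo",
        roleArn="arn:aws:iam::111122223333:role/BedrockEvalRole",
        evaluationConfig=automated_eval_config(
            "QuestionAndAnswer", "Builtin.BoolQ", ["Builtin.Accuracy"]
        ),
        inferenceConfig={
            "models": [
                {
                    "bedrockModel": {
                        "modelIdentifier": "anthropic.claude-v2",
                        # JSON string; supported keys vary by model.
                        "inferenceParams": '{"temperature": 0}',
                    }
                }
            ]
        },
        outputDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/eval-results/"},
    )
    return resp["jobArn"]
```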

CreateGuardrailVersion (new) Link ¶

Creates a version of the guardrail. Use this API to create a snapshot of the guardrail when you are satisfied with a configuration, or to compare the configuration with another version.

See also: AWS API Documentation

Request Syntax

client.create_guardrail_version(
    guardrailIdentifier='string',
    description='string',
    clientRequestToken='string'
)
type guardrailIdentifier:

string

param guardrailIdentifier:

[REQUIRED]

The unique identifier of the guardrail.

type description:

string

param description:

A description of the guardrail version.

type clientRequestToken:

string

param clientRequestToken:

A unique, case-sensitive identifier to ensure that the API request completes no more than once. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.

This field is autopopulated if not provided.

rtype:

dict

returns:

Response Syntax

{
    'guardrailId': 'string',
    'version': 'string'
}

Response Structure

  • (dict) --

    • guardrailId (string) --

      The unique identifier of the guardrail.

    • version (string) --

      The number of the version of the guardrail.
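
A small helper that snapshots a guardrail and returns the new version number; the optional description is only sent when provided:

```python
def snapshot_guardrail(client, guardrail_id, description=None):
    """Create a numbered version of a guardrail and return the version string.

    description (and clientRequestToken, if you manage idempotency yourself)
    are optional, so they are only included when set.
    """
    kwargs = {"guardrailIdentifier": guardrail_id}
    if description:
        kwargs["description"] = description
    resp = client.create_guardrail_version(**kwargs)
    return resp["version"]
```

Call it with a real bedrock client and the guardrail ID returned by create_guardrail.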

DeleteGuardrail (new) Link ¶

Deletes a guardrail.

  • To delete a guardrail, only specify the ARN of the guardrail in the guardrailIdentifier field. If you delete a guardrail, all of its versions will be deleted.

  • To delete a version of a guardrail, specify the ARN of the guardrail in the guardrailIdentifier field and the version in the guardrailVersion field.

See also: AWS API Documentation

Request Syntax

client.delete_guardrail(
    guardrailIdentifier='string',
    guardrailVersion='string'
)
type guardrailIdentifier:

string

param guardrailIdentifier:

[REQUIRED]

The unique identifier of the guardrail.

type guardrailVersion:

string

param guardrailVersion:

The version of the guardrail.

rtype:

dict

returns:

Response Syntax

{}

Response Structure

  • (dict) --
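
The two deletion modes above differ only in whether guardrailVersion is present, which a helper can make explicit; the ARN used in the test is a placeholder:

```python
def delete_guardrail_kwargs(guardrail_arn, version=None):
    """Build delete_guardrail arguments.

    Omit guardrailVersion to delete the guardrail and every one of its
    versions; pass a version string to delete only that version.
    """
    kwargs = {"guardrailIdentifier": guardrail_arn}
    if version is not None:
        kwargs["guardrailVersion"] = version
    return kwargs
```

With a real client: client.delete_guardrail(**delete_guardrail_kwargs(arn)) removes the guardrail entirely, while delete_guardrail_kwargs(arn, version="1") targets a single version.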

CreateGuardrail (new) Link ¶

Creates a guardrail to block topics and to filter out harmful content.

  • Specify a name and optional description.

  • Specify messages for when the guardrail successfully blocks a prompt or a model response in the blockedInputMessaging and blockedOutputsMessaging fields.

  • Specify topics for the guardrail to deny in the topicPolicyConfig object. Each GuardrailTopicConfig object in the topicsConfig list pertains to one topic.

    • Give a name and description so that the guardrail can properly identify the topic.

    • Specify DENY in the type field.

    • (Optional) Provide up to five prompts that you would categorize as belonging to the topic in the examples list.

  • Specify filter strengths for the harmful categories defined in Amazon Bedrock in the contentPolicyConfig object. Each GuardrailContentFilterConfig object in the filtersConfig list pertains to a harmful category. For more information, see Content filters. For more information about the fields in a content filter, see GuardrailContentFilterConfig.

    • Specify the category in the type field.

    • Specify the strength of the filter for prompts in the inputStrength field and for model responses in the outputStrength field of the GuardrailContentFilterConfig.

  • (Optional) For security, include the ARN of a KMS key in the kmsKeyId field.

  • (Optional) Attach any tags to the guardrail in the tags object. For more information, see Tag resources.
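
The steps above can be sketched as a minimal create_guardrail request; the denied topic, blocked messages, and filter strengths are illustrative:

```python
def build_guardrail_request(name):
    """Assemble a minimal create_guardrail request following the steps above.

    A real policy would cover every content category and any word, PII, or
    regex policies your application needs.
    """
    return {
        "name": name,
        # Messages returned when the guardrail blocks a prompt or a response.
        "blockedInputMessaging": "Sorry, I can't discuss that topic.",
        "blockedOutputsMessaging": "Sorry, I can't provide that response.",
        "topicPolicyConfig": {
            "topicsConfig": [
                {
                    "name": "InvestmentAdvice",
                    "definition": "Recommendations to buy or sell specific securities.",
                    "examples": ["Which stock should I buy right now?"],
                    "type": "DENY",
                }
            ]
        },
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
            ]
        },
    }

def create_demo_guardrail(region="us-east-1"):
    """Create the guardrail (requires AWS credentials)."""
    import boto3
    bedrock = boto3.client("bedrock", region_name=region)
    return bedrock.create_guardrail(**build_guardrail_request("demo-guardrail"))
```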

See also: AWS API Documentation

Request Syntax

client.create_guardrail(
    name='string',
    description='string',
    topicPolicyConfig={
        'topicsConfig': [
            {
                'name': 'string',
                'definition': 'string',
                'examples': [
                    'string',
                ],
                'type': 'DENY'
            },
        ]
    },
    contentPolicyConfig={
        'filtersConfig': [
            {
                'type': 'SEXUAL'|'VIOLENCE'|'HATE'|'INSULTS'|'MISCONDUCT'|'PROMPT_ATTACK',
                'inputStrength': 'NONE'|'LOW'|'MEDIUM'|'HIGH',
                'outputStrength': 'NONE'|'LOW'|'MEDIUM'|'HIGH'
            },
        ]
    },
    wordPolicyConfig={
        'wordsConfig': [
            {
                'text': 'string'
            },
        ],
        'managedWordListsConfig': [
            {
                'type': 'PROFANITY'
            },
        ]
    },
    sensitiveInformationPolicyConfig={
        'piiEntitiesConfig': [
            {
                'type': 'ADDRESS'|'AGE'|'AWS_ACCESS_KEY'|'AWS_SECRET_KEY'|'CA_HEALTH_NUMBER'|'CA_SOCIAL_INSURANCE_NUMBER'|'CREDIT_DEBIT_CARD_CVV'|'CREDIT_DEBIT_CARD_EXPIRY'|'CREDIT_DEBIT_CARD_NUMBER'|'DRIVER_ID'|'EMAIL'|'INTERNATIONAL_BANK_ACCOUNT_NUMBER'|'IP_ADDRESS'|'LICENSE_PLATE'|'MAC_ADDRESS'|'NAME'|'PASSWORD'|'PHONE'|'PIN'|'SWIFT_CODE'|'UK_NATIONAL_HEALTH_SERVICE_NUMBER'|'UK_NATIONAL_INSURANCE_NUMBER'|'UK_UNIQUE_TAXPAYER_REFERENCE_NUMBER'|'URL'|'USERNAME'|'US_BANK_ACCOUNT_NUMBER'|'US_BANK_ROUTING_NUMBER'|'US_INDIVIDUAL_TAX_IDENTIFICATION_NUMBER'|'US_PASSPORT_NUMBER'|'US_SOCIAL_SECURITY_NUMBER'|'VEHICLE_IDENTIFICATION_NUMBER',
                'action': 'BLOCK'|'ANONYMIZE'
            },
        ],
        'regexesConfig': [
            {
                'name': 'string',
                'description': 'string',
                'pattern': 'string',
                'action': 'BLOCK'|'ANONYMIZE'
            },
        ]
    },
    blockedInputMessaging='string',
    blockedOutputsMessaging='string',
    kmsKeyId='string',
    tags=[
        {
            'key': 'string',
            'value': 'string'
        },
    ],
    clientRequestToken='string'
)
type name:

string

param name:

[REQUIRED]

The name to give the guardrail.

type description:

string

param description:

A description of the guardrail.

type topicPolicyConfig:

dict

param topicPolicyConfig:

The topic policies to configure for the guardrail.

  • topicsConfig (list) -- [REQUIRED]

    A list of policies related to topics that the guardrail should deny.

    • (dict) --

      Details about topics for the guardrail to identify and deny.

      • name (string) -- [REQUIRED]

        The name of the topic to deny.

      • definition (string) -- [REQUIRED]

        A definition of the topic to deny.

      • examples (list) --

        A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.

        • (string) --

      • type (string) -- [REQUIRED]

        Specifies to deny the topic.

type contentPolicyConfig:

dict

param contentPolicyConfig:

The content filter policies to configure for the guardrail.

  • filtersConfig (list) -- [REQUIRED]

    Contains the type of the content filter and how strongly it should apply to prompts and model responses.

    • (dict) --

      Contains filter strengths for harmful content. Guardrails support the following content filters to detect and filter harmful user inputs and FM-generated outputs.

      • Hate – Describes language or a statement that discriminates, criticizes, insults, denounces, or dehumanizes a person or group on the basis of an identity (such as race, ethnicity, gender, religion, sexual orientation, ability, and national origin).

      • Insults – Describes language or a statement that includes demeaning, humiliating, mocking, insulting, or belittling language. This type of language is also labeled as bullying.

      • Sexual – Describes language or a statement that indicates sexual interest, activity, or arousal using direct or indirect references to body parts, physical traits, or sex.

      • Violence – Describes language or a statement that includes glorification of or threats to inflict physical pain, hurt, or injury toward a person, group or thing.

      Content filtering depends on the confidence classification of user inputs and FM responses across each of the four harmful categories. All input and output statements are classified into one of four confidence levels (NONE, LOW, MEDIUM, HIGH) for each harmful category. For example, if a statement is classified as Hate with HIGH confidence, the likelihood of the statement representing hateful content is high. A single statement can be classified across multiple categories with varying confidence levels. For example, a single statement can be classified as Hate with HIGH confidence, Insults with LOW confidence, Sexual with NONE confidence, and Violence with MEDIUM confidence.

      For more information, see Guardrails content filters.

      • type (string) -- [REQUIRED]

        The harmful category that the content filter is applied to.

      • inputStrength (string) -- [REQUIRED]

        The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

      • outputStrength (string) -- [REQUIRED]

        The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

type wordPolicyConfig:

dict

param wordPolicyConfig:

The word policy you configure for the guardrail.

  • wordsConfig (list) --

    A list of words to configure for the guardrail.

    • (dict) --

      A word to configure for the guardrail.

      • text (string) -- [REQUIRED]

        Text of the word configured for the guardrail to block.

  • managedWordListsConfig (list) --

    A list of managed words to configure for the guardrail.

    • (dict) --

      The managed word list to configure for the guardrail.

      • type (string) -- [REQUIRED]

        The managed word type to configure for the guardrail.

type sensitiveInformationPolicyConfig:

dict

param sensitiveInformationPolicyConfig:

The sensitive information policy to configure for the guardrail.

  • piiEntitiesConfig (list) --

    A list of PII entities to configure for the guardrail.

    • (dict) --

      The PII entity to configure for the guardrail.

      • type (string) -- [REQUIRED]

        The type of PII entity to configure for the guardrail.

      • action (string) -- [REQUIRED]

        The guardrail action to take when the PII entity is detected.

  • regexesConfig (list) --

    A list of regular expressions to configure for the guardrail.

    • (dict) --

      The regular expression to configure for the guardrail.

      • name (string) -- [REQUIRED]

        The name of the regular expression to configure for the guardrail.

      • description (string) --

        The description of the regular expression to configure for the guardrail.

      • pattern (string) -- [REQUIRED]

        The regular expression pattern to configure for the guardrail.

      • action (string) -- [REQUIRED]

        The guardrail action to configure when a match to the regular expression is detected.

type blockedInputMessaging:

string

param blockedInputMessaging:

[REQUIRED]

The message to return when the guardrail blocks a prompt.

type blockedOutputsMessaging:

string

param blockedOutputsMessaging:

[REQUIRED]

The message to return when the guardrail blocks a model response.

type kmsKeyId:

string

param kmsKeyId:

The ARN of the KMS key that you use to encrypt the guardrail.

type tags:

list

param tags:

The tags that you want to attach to the guardrail.

  • (dict) --

    Definition of the key/value pair for a tag.

    • key (string) -- [REQUIRED]

      Key for the tag.

    • value (string) -- [REQUIRED]

      Value for the tag.

type clientRequestToken:

string

param clientRequestToken:

A unique, case-sensitive identifier to ensure that the API request completes no more than once. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency in the Amazon Bedrock User Guide.

This field is autopopulated if not provided.

rtype:

dict

returns:

Response Syntax

{
    'guardrailId': 'string',
    'guardrailArn': 'string',
    'version': 'string',
    'createdAt': datetime(2015, 1, 1)
}

Response Structure

  • (dict) --

    • guardrailId (string) --

      The unique identifier of the guardrail that was created.

    • guardrailArn (string) --

      The ARN of the guardrail that was created.

    • version (string) --

      The version of the guardrail that was created. This value should be 1.

    • createdAt (datetime) --

      The time at which the guardrail was created.
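Putting the request parameters above together: a minimal sketch of assembling a contentPolicyConfig dict in Python before passing it to create_guardrail. The helper name and local enum validation are illustrative, not part of the API; the allowed values come from the request syntax shown above.

```python
def build_content_policy(filters):
    """Assemble a contentPolicyConfig dict, checking the enum values shown above."""
    categories = {"SEXUAL", "VIOLENCE", "HATE", "INSULTS", "MISCONDUCT", "PROMPT_ATTACK"}
    strengths = {"NONE", "LOW", "MEDIUM", "HIGH"}
    configs = []
    for category, input_strength, output_strength in filters:
        if category not in categories:
            raise ValueError(f"unknown filter type: {category}")
        if input_strength not in strengths or output_strength not in strengths:
            raise ValueError("strength must be one of NONE, LOW, MEDIUM, HIGH")
        configs.append({
            "type": category,
            "inputStrength": input_strength,
            "outputStrength": output_strength,
        })
    return {"filtersConfig": configs}

policy = build_content_policy([
    ("HATE", "HIGH", "HIGH"),
    ("VIOLENCE", "MEDIUM", "HIGH"),
])
# Pass as contentPolicyConfig=policy in the create_guardrail call.
```

Validating locally keeps a typo in a category or strength from surfacing only as a service-side ValidationException.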

ListGuardrails (new) Link ¶

Lists details about all the guardrails in an account. To list the DRAFT version of all your guardrails, don't specify the guardrailIdentifier field. To list all versions of a guardrail, specify the ARN of the guardrail in the guardrailIdentifier field.

You can set the maximum number of results to return in a response in the maxResults field. If there are more results than the number you set, the response returns a nextToken that you can send in another ListGuardrails request to see the next batch of results.

See also: AWS API Documentation

Request Syntax

client.list_guardrails(
    guardrailIdentifier='string',
    maxResults=123,
    nextToken='string'
)
type guardrailIdentifier:

string

param guardrailIdentifier:

The unique identifier of the guardrail.

type maxResults:

integer

param maxResults:

The maximum number of results to return in the response.

type nextToken:

string

param nextToken:

If there are more results than were returned in the response, the response returns a nextToken that you can send in another ListGuardrails request to see the next batch of results.

rtype:

dict

returns:

Response Syntax

{
    'guardrails': [
        {
            'id': 'string',
            'arn': 'string',
            'status': 'CREATING'|'UPDATING'|'VERSIONING'|'READY'|'FAILED'|'DELETING',
            'name': 'string',
            'description': 'string',
            'version': 'string',
            'createdAt': datetime(2015, 1, 1),
            'updatedAt': datetime(2015, 1, 1)
        },
    ],
    'nextToken': 'string'
}

Response Structure

  • (dict) --

    • guardrails (list) --

      A list of objects, each of which contains details about a guardrail.

      • (dict) --

        Contains details about a guardrail.

        • id (string) --

          The unique identifier of the guardrail.

        • arn (string) --

          The ARN of the guardrail.

        • status (string) --

          The status of the guardrail.

        • name (string) --

          The name of the guardrail.

        • description (string) --

          A description of the guardrail.

        • version (string) --

          The version of the guardrail.

        • createdAt (datetime) --

          The date and time at which the guardrail was created.

        • updatedAt (datetime) --

          The date and time at which the guardrail was last updated.

    • nextToken (string) --

      If there are more results than were returned in the response, the response returns a nextToken that you can send in another ListGuardrails request to see the next batch of results.
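The nextToken flow described above can be sketched as a small pagination helper. The stand-in client below mimics the response shape so the loop runs without AWS credentials; against a real boto3 bedrock client, the same helper applies unchanged.

```python
def list_all_guardrails(client, guardrail_identifier=None, page_size=50):
    """Follow nextToken until exhausted, collecting every guardrail summary."""
    kwargs = {"maxResults": page_size}
    if guardrail_identifier is not None:
        kwargs["guardrailIdentifier"] = guardrail_identifier
    summaries = []
    while True:
        resp = client.list_guardrails(**kwargs)
        summaries.extend(resp.get("guardrails", []))
        token = resp.get("nextToken")
        if not token:
            return summaries
        kwargs["nextToken"] = token  # request the next batch

class _FakePages:
    """Stand-in client returning two pages shaped like the response syntax above."""
    def __init__(self):
        self._pages = [
            {"guardrails": [{"id": "gr-1"}], "nextToken": "t1"},
            {"guardrails": [{"id": "gr-2"}]},  # no nextToken: last page
        ]
    def list_guardrails(self, **kwargs):
        return self._pages.pop(0)

all_guardrails = list_all_guardrails(_FakePages(), page_size=1)
```

Omitting guardrailIdentifier lists the DRAFT version of every guardrail; passing a guardrail ARN lists all versions of that guardrail, as noted above.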

GetGuardrail (new) Link ¶

Gets details about a guardrail. If you don't specify a version, the response returns details for the DRAFT version.

See also: AWS API Documentation

Request Syntax

client.get_guardrail(
    guardrailIdentifier='string',
    guardrailVersion='string'
)
type guardrailIdentifier:

string

param guardrailIdentifier:

[REQUIRED]

The unique identifier of the guardrail for which to get details.

type guardrailVersion:

string

param guardrailVersion:

The version of the guardrail for which to get details. If you don't specify a version, the response returns details for the DRAFT version.

rtype:

dict

returns:

Response Syntax

{
    'name': 'string',
    'description': 'string',
    'guardrailId': 'string',
    'guardrailArn': 'string',
    'version': 'string',
    'status': 'CREATING'|'UPDATING'|'VERSIONING'|'READY'|'FAILED'|'DELETING',
    'topicPolicy': {
        'topics': [
            {
                'name': 'string',
                'definition': 'string',
                'examples': [
                    'string',
                ],
                'type': 'DENY'
            },
        ]
    },
    'contentPolicy': {
        'filters': [
            {
                'type': 'SEXUAL'|'VIOLENCE'|'HATE'|'INSULTS'|'MISCONDUCT'|'PROMPT_ATTACK',
                'inputStrength': 'NONE'|'LOW'|'MEDIUM'|'HIGH',
                'outputStrength': 'NONE'|'LOW'|'MEDIUM'|'HIGH'
            },
        ]
    },
    'wordPolicy': {
        'words': [
            {
                'text': 'string'
            },
        ],
        'managedWordLists': [
            {
                'type': 'PROFANITY'
            },
        ]
    },
    'sensitiveInformationPolicy': {
        'piiEntities': [
            {
                'type': 'ADDRESS'|'AGE'|'AWS_ACCESS_KEY'|'AWS_SECRET_KEY'|'CA_HEALTH_NUMBER'|'CA_SOCIAL_INSURANCE_NUMBER'|'CREDIT_DEBIT_CARD_CVV'|'CREDIT_DEBIT_CARD_EXPIRY'|'CREDIT_DEBIT_CARD_NUMBER'|'DRIVER_ID'|'EMAIL'|'INTERNATIONAL_BANK_ACCOUNT_NUMBER'|'IP_ADDRESS'|'LICENSE_PLATE'|'MAC_ADDRESS'|'NAME'|'PASSWORD'|'PHONE'|'PIN'|'SWIFT_CODE'|'UK_NATIONAL_HEALTH_SERVICE_NUMBER'|'UK_NATIONAL_INSURANCE_NUMBER'|'UK_UNIQUE_TAXPAYER_REFERENCE_NUMBER'|'URL'|'USERNAME'|'US_BANK_ACCOUNT_NUMBER'|'US_BANK_ROUTING_NUMBER'|'US_INDIVIDUAL_TAX_IDENTIFICATION_NUMBER'|'US_PASSPORT_NUMBER'|'US_SOCIAL_SECURITY_NUMBER'|'VEHICLE_IDENTIFICATION_NUMBER',
                'action': 'BLOCK'|'ANONYMIZE'
            },
        ],
        'regexes': [
            {
                'name': 'string',
                'description': 'string',
                'pattern': 'string',
                'action': 'BLOCK'|'ANONYMIZE'
            },
        ]
    },
    'createdAt': datetime(2015, 1, 1),
    'updatedAt': datetime(2015, 1, 1),
    'statusReasons': [
        'string',
    ],
    'failureRecommendations': [
        'string',
    ],
    'blockedInputMessaging': 'string',
    'blockedOutputsMessaging': 'string',
    'kmsKeyArn': 'string'
}

Response Structure

  • (dict) --

    • name (string) --

      The name of the guardrail.

    • description (string) --

      The description of the guardrail.

    • guardrailId (string) --

      The unique identifier of the guardrail.

    • guardrailArn (string) --

      The ARN of the guardrail.

    • version (string) --

      The version of the guardrail.

    • status (string) --

      The status of the guardrail.

    • topicPolicy (dict) --

      The topic policy that was configured for the guardrail.

      • topics (list) --

        A list of policies related to topics that the guardrail should deny.

        • (dict) --

          Details about topics for the guardrail to identify and deny.

          • name (string) --

            The name of the topic to deny.

          • definition (string) --

            A definition of the topic to deny.

          • examples (list) --

            A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.

            • (string) --

          • type (string) --

            Specifies to deny the topic.

    • contentPolicy (dict) --

      The content policy that was configured for the guardrail.

      • filters (list) --

        Contains the type of the content filter and how strongly it should apply to prompts and model responses.

        • (dict) --

          Contains filter strengths for harmful content. Guardrails support the following content filters to detect and filter harmful user inputs and FM-generated outputs.

          • Hate – Describes language or a statement that discriminates, criticizes, insults, denounces, or dehumanizes a person or group on the basis of an identity (such as race, ethnicity, gender, religion, sexual orientation, ability, and national origin).

          • Insults – Describes language or a statement that includes demeaning, humiliating, mocking, insulting, or belittling language. This type of language is also labeled as bullying.

          • Sexual – Describes language or a statement that indicates sexual interest, activity, or arousal using direct or indirect references to body parts, physical traits, or sex.

          • Violence – Describes language or a statement that includes glorification of or threats to inflict physical pain, hurt, or injury toward a person, group, or thing.

          Content filtering depends on the confidence classification of user inputs and FM responses across each of the four harmful categories. All input and output statements are classified into one of four confidence levels (NONE, LOW, MEDIUM, HIGH) for each harmful category. For example, if a statement is classified as Hate with HIGH confidence, the likelihood of the statement representing hateful content is high. A single statement can be classified across multiple categories with varying confidence levels. For example, a single statement can be classified as Hate with HIGH confidence, Insults with LOW confidence, Sexual with NONE confidence, and Violence with MEDIUM confidence.

          For more information, see Guardrails content filters.

          • type (string) --

            The harmful category that the content filter is applied to.

          • inputStrength (string) --

            The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

          • outputStrength (string) --

            The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

    • wordPolicy (dict) --

      The word policy that was configured for the guardrail.

      • words (list) --

        A list of words configured for the guardrail.

        • (dict) --

          A word configured for the guardrail.

          • text (string) --

            Text of the word configured for the guardrail to block.

      • managedWordLists (list) --

        A list of managed words configured for the guardrail.

        • (dict) --

          The managed word list that was configured for the guardrail. (This is a pre-defined list of words managed by Guardrails.)

          • type (string) --

            The managed word type that was configured for the guardrail. (Currently, only a profanity word list is offered.)

    • sensitiveInformationPolicy (dict) --

      The sensitive information policy that was configured for the guardrail.

      • piiEntities (list) --

        The list of PII entities configured for the guardrail.

        • (dict) --

          The PII entity configured for the guardrail.

          • type (string) --

            The type of PII entity. For example, Social Security Number.

          • action (string) --

            The configured guardrail action when a PII entity is detected.

      • regexes (list) --

        The list of regular expressions configured for the guardrail.

        • (dict) --

          The regular expression configured for the guardrail.

          • name (string) --

            The name of the regular expression for the guardrail.

          • description (string) --

            The description of the regular expression for the guardrail.

          • pattern (string) --

            The pattern of the regular expression configured for the guardrail.

          • action (string) --

            The action taken when a match to the regular expression is detected.

    • createdAt (datetime) --

      The date and time at which the guardrail was created.

    • updatedAt (datetime) --

      The date and time at which the guardrail was updated.

    • statusReasons (list) --

      Appears if the status is FAILED. A list of reasons for why the guardrail failed to be created, updated, versioned, or deleted.

      • (string) --

    • failureRecommendations (list) --

      Appears if the status of the guardrail is FAILED. A list of recommendations to carry out before retrying the request.

      • (string) --

    • blockedInputMessaging (string) --

      The message that the guardrail returns when it blocks a prompt.

    • blockedOutputsMessaging (string) --

      The message that the guardrail returns when it blocks a model response.

    • kmsKeyArn (string) --

      The ARN of the KMS key that encrypts the guardrail.
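Since statusReasons and failureRecommendations appear only when the guardrail status is FAILED, a small response-parsing helper can surface them for troubleshooting. The helper and the sample response fragment are illustrative, shaped like the Response Syntax above.

```python
def failure_summary(resp):
    """Pull troubleshooting fields from a GetGuardrail response.

    statusReasons and failureRecommendations are present only when
    the guardrail status is FAILED; return None otherwise.
    """
    if resp.get("status") != "FAILED":
        return None
    return {
        "reasons": resp.get("statusReasons", []),
        "recommendations": resp.get("failureRecommendations", []),
    }

# Hypothetical response fragment for illustration only.
sample = {
    "status": "FAILED",
    "statusReasons": ["Example reason"],
    "failureRecommendations": ["Example recommendation"],
}
summary = failure_summary(sample)
```

For a READY or CREATING guardrail the helper returns None, so callers can branch on a single value rather than probing for optional keys.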

StopEvaluationJob (new) Link ¶

Stops an in-progress model evaluation job.

See also: AWS API Documentation

Request Syntax

client.stop_evaluation_job(
    jobIdentifier='string'
)
type jobIdentifier:

string

param jobIdentifier:

[REQUIRED]

The ARN of the model evaluation job you want to stop.

rtype:

dict

returns:

Response Syntax

{}

Response Structure

  • (dict) --
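StopEvaluationJob returns an empty body, so to confirm the job actually stopped you can poll GetEvaluationJob for the Stopping-to-Stopped transition. A sketch, with a stand-in client so the loop runs without AWS credentials; real code would use a boto3 bedrock client and sleep between polls.

```python
def wait_until_stopped(client, job_arn, max_polls=30):
    """Call stop_evaluation_job, then poll until the status leaves 'Stopping'."""
    client.stop_evaluation_job(jobIdentifier=job_arn)
    for _ in range(max_polls):
        status = client.get_evaluation_job(jobIdentifier=job_arn)["status"]
        if status != "Stopping":  # 'Stopped', or 'Completed'/'Failed' if it finished first
            return status
        # time.sleep(10) between polls in real code
    raise TimeoutError(f"job {job_arn} still stopping after {max_polls} polls")

class _FakeJob:
    """Stand-in client: reports 'Stopping' twice, then 'Stopped'."""
    def __init__(self):
        self._statuses = ["Stopping", "Stopping", "Stopped"]
    def stop_evaluation_job(self, jobIdentifier):
        return {}
    def get_evaluation_job(self, jobIdentifier):
        return {"status": self._statuses.pop(0)}

final = wait_until_stopped(_FakeJob(), "arn:aws:bedrock:example")
```

The job ARN here is a placeholder; the status values are the ones listed in the GetEvaluationJob response syntax.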

UpdateGuardrail (new) Link ¶

Updates a guardrail with the values you specify.

  • Specify a name and optional description.

  • Specify messages for when the guardrail successfully blocks a prompt or a model response in the blockedInputMessaging and blockedOutputsMessaging fields.

  • Specify topics for the guardrail to deny in the topicPolicyConfig object. Each GuardrailTopicConfig object in the topicsConfig list pertains to one topic.

    • Give a name and description so that the guardrail can properly identify the topic.

    • Specify DENY in the type field.

    • (Optional) Provide up to five prompts that you would categorize as belonging to the topic in the examples list.

  • Specify filter strengths for the harmful categories defined in Amazon Bedrock in the contentPolicyConfig object. Each GuardrailContentFilterConfig object in the filtersConfig list pertains to a harmful category. For more information, see Content filters. For more information about the fields in a content filter, see GuardrailContentFilterConfig.

    • Specify the category in the type field.

    • Specify the strength of the filter for prompts in the inputStrength field and for model responses in the outputStrength field of the GuardrailContentFilterConfig.

  • (Optional) For security, include the ARN of a KMS key in the kmsKeyId field.

  • (Optional) Attach any tags to the guardrail in the tags object. For more information, see Tag resources.

See also: AWS API Documentation

Request Syntax

client.update_guardrail(
    guardrailIdentifier='string',
    name='string',
    description='string',
    topicPolicyConfig={
        'topicsConfig': [
            {
                'name': 'string',
                'definition': 'string',
                'examples': [
                    'string',
                ],
                'type': 'DENY'
            },
        ]
    },
    contentPolicyConfig={
        'filtersConfig': [
            {
                'type': 'SEXUAL'|'VIOLENCE'|'HATE'|'INSULTS'|'MISCONDUCT'|'PROMPT_ATTACK',
                'inputStrength': 'NONE'|'LOW'|'MEDIUM'|'HIGH',
                'outputStrength': 'NONE'|'LOW'|'MEDIUM'|'HIGH'
            },
        ]
    },
    wordPolicyConfig={
        'wordsConfig': [
            {
                'text': 'string'
            },
        ],
        'managedWordListsConfig': [
            {
                'type': 'PROFANITY'
            },
        ]
    },
    sensitiveInformationPolicyConfig={
        'piiEntitiesConfig': [
            {
                'type': 'ADDRESS'|'AGE'|'AWS_ACCESS_KEY'|'AWS_SECRET_KEY'|'CA_HEALTH_NUMBER'|'CA_SOCIAL_INSURANCE_NUMBER'|'CREDIT_DEBIT_CARD_CVV'|'CREDIT_DEBIT_CARD_EXPIRY'|'CREDIT_DEBIT_CARD_NUMBER'|'DRIVER_ID'|'EMAIL'|'INTERNATIONAL_BANK_ACCOUNT_NUMBER'|'IP_ADDRESS'|'LICENSE_PLATE'|'MAC_ADDRESS'|'NAME'|'PASSWORD'|'PHONE'|'PIN'|'SWIFT_CODE'|'UK_NATIONAL_HEALTH_SERVICE_NUMBER'|'UK_NATIONAL_INSURANCE_NUMBER'|'UK_UNIQUE_TAXPAYER_REFERENCE_NUMBER'|'URL'|'USERNAME'|'US_BANK_ACCOUNT_NUMBER'|'US_BANK_ROUTING_NUMBER'|'US_INDIVIDUAL_TAX_IDENTIFICATION_NUMBER'|'US_PASSPORT_NUMBER'|'US_SOCIAL_SECURITY_NUMBER'|'VEHICLE_IDENTIFICATION_NUMBER',
                'action': 'BLOCK'|'ANONYMIZE'
            },
        ],
        'regexesConfig': [
            {
                'name': 'string',
                'description': 'string',
                'pattern': 'string',
                'action': 'BLOCK'|'ANONYMIZE'
            },
        ]
    },
    blockedInputMessaging='string',
    blockedOutputsMessaging='string',
    kmsKeyId='string'
)
type guardrailIdentifier:

string

param guardrailIdentifier:

[REQUIRED]

The unique identifier of the guardrail.

type name:

string

param name:

[REQUIRED]

A name for the guardrail.

type description:

string

param description:

A description of the guardrail.

type topicPolicyConfig:

dict

param topicPolicyConfig:

The topic policy to configure for the guardrail.

  • topicsConfig (list) -- [REQUIRED]

    A list of policies related to topics that the guardrail should deny.

    • (dict) --

      Details about topics for the guardrail to identify and deny.

      • name (string) -- [REQUIRED]

        The name of the topic to deny.

      • definition (string) -- [REQUIRED]

        A definition of the topic to deny.

      • examples (list) --

        A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.

        • (string) --

      • type (string) -- [REQUIRED]

        Specifies to deny the topic.

type contentPolicyConfig:

dict

param contentPolicyConfig:

The content policy to configure for the guardrail.

  • filtersConfig (list) -- [REQUIRED]

    Contains the type of the content filter and how strongly it should apply to prompts and model responses.

    • (dict) --

      Contains filter strengths for harmful content. Guardrails support the following content filters to detect and filter harmful user inputs and FM-generated outputs.

      • Hate – Describes language or a statement that discriminates, criticizes, insults, denounces, or dehumanizes a person or group on the basis of an identity (such as race, ethnicity, gender, religion, sexual orientation, ability, and national origin).

      • Insults – Describes language or a statement that includes demeaning, humiliating, mocking, insulting, or belittling language. This type of language is also labeled as bullying.

      • Sexual – Describes language or a statement that indicates sexual interest, activity, or arousal using direct or indirect references to body parts, physical traits, or sex.

      • Violence – Describes language or a statement that includes glorification of or threats to inflict physical pain, hurt, or injury toward a person, group, or thing.

      Content filtering depends on the confidence classification of user inputs and FM responses across each of the four harmful categories. All input and output statements are classified into one of four confidence levels (NONE, LOW, MEDIUM, HIGH) for each harmful category. For example, if a statement is classified as Hate with HIGH confidence, the likelihood of the statement representing hateful content is high. A single statement can be classified across multiple categories with varying confidence levels. For example, a single statement can be classified as Hate with HIGH confidence, Insults with LOW confidence, Sexual with NONE confidence, and Violence with MEDIUM confidence.

      For more information, see Guardrails content filters.

      • type (string) -- [REQUIRED]

        The harmful category that the content filter is applied to.

      • inputStrength (string) -- [REQUIRED]

        The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

      • outputStrength (string) -- [REQUIRED]

        The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

type wordPolicyConfig:

dict

param wordPolicyConfig:

The word policy to configure for the guardrail.

  • wordsConfig (list) --

    A list of words to configure for the guardrail.

    • (dict) --

      A word to configure for the guardrail.

      • text (string) -- [REQUIRED]

        Text of the word configured for the guardrail to block.

  • managedWordListsConfig (list) --

    A list of managed words to configure for the guardrail.

    • (dict) --

      The managed word list to configure for the guardrail.

      • type (string) -- [REQUIRED]

        The managed word type to configure for the guardrail.

type sensitiveInformationPolicyConfig:

dict

param sensitiveInformationPolicyConfig:

The sensitive information policy to configure for the guardrail.

  • piiEntitiesConfig (list) --

    A list of PII entities to configure for the guardrail.

    • (dict) --

      The PII entity to configure for the guardrail.

      • type (string) -- [REQUIRED]

        The type of PII entity to configure for the guardrail.

      • action (string) -- [REQUIRED]

        The guardrail action to take when the PII entity is detected.

  • regexesConfig (list) --

    A list of regular expressions to configure for the guardrail.

    • (dict) --

      The regular expression to configure for the guardrail.

      • name (string) -- [REQUIRED]

        The name of the regular expression to configure for the guardrail.

      • description (string) --

        The description of the regular expression to configure for the guardrail.

      • pattern (string) -- [REQUIRED]

        The regular expression pattern to configure for the guardrail.

      • action (string) -- [REQUIRED]

        The guardrail action to configure when a match to the regular expression is detected.

type blockedInputMessaging:

string

param blockedInputMessaging:

[REQUIRED]

The message to return when the guardrail blocks a prompt.

type blockedOutputsMessaging:

string

param blockedOutputsMessaging:

[REQUIRED]

The message to return when the guardrail blocks a model response.

type kmsKeyId:

string

param kmsKeyId:

The ARN of the KMS key with which to encrypt the guardrail.

rtype:

dict

returns:

Response Syntax

{
    'guardrailId': 'string',
    'guardrailArn': 'string',
    'version': 'string',
    'updatedAt': datetime(2015, 1, 1)
}

Response Structure

  • (dict) --

    • guardrailId (string) --

      The unique identifier of the guardrail.

    • guardrailArn (string) --

      The ARN of the guardrail.

    • version (string) --

      The version of the guardrail.

    • updatedAt (datetime) --

      The date and time at which the guardrail was updated.
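Because each regexesConfig entry above must carry a valid pattern and one of two actions, it can help to validate entries client-side before calling update_guardrail. A sketch; the helper is illustrative, not part of the API.

```python
import re

def regex_config(name, pattern, action, description=None):
    """Build one regexesConfig entry, validating it before the API call."""
    re.compile(pattern)  # raises re.error if the pattern is invalid
    if action not in {"BLOCK", "ANONYMIZE"}:
        raise ValueError("action must be BLOCK or ANONYMIZE")
    entry = {"name": name, "pattern": pattern, "action": action}
    if description is not None:
        entry["description"] = description
    return entry

cfg = {"regexesConfig": [regex_config("ticket-id", r"TKT-\d{6}", "ANONYMIZE")]}
# Pass as part of sensitiveInformationPolicyConfig in the update_guardrail call.
```

Note that Python's re syntax and the service-side regex engine may not agree on every construct, so local compilation is a useful first check rather than a guarantee.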