Amazon Bedrock

2024/08/27 - Amazon Bedrock - 2 new API methods

Changes: Amazon Bedrock SDK updates for Inference Profile.

GetInferenceProfile (new) Link ¶

Gets information about an inference profile. For more information, see the Amazon Bedrock User Guide.

See also: AWS API Documentation

Request Syntax

client.get_inference_profile(
    inferenceProfileIdentifier='string'
)
type inferenceProfileIdentifier:

string

param inferenceProfileIdentifier:

[REQUIRED]

The unique identifier of the inference profile.

rtype:

dict

returns:

Response Syntax

{
    'inferenceProfileName': 'string',
    'models': [
        {
            'modelArn': 'string'
        },
    ],
    'description': 'string',
    'createdAt': datetime(2015, 1, 1),
    'updatedAt': datetime(2015, 1, 1),
    'inferenceProfileArn': 'string',
    'inferenceProfileId': 'string',
    'status': 'ACTIVE',
    'type': 'SYSTEM_DEFINED'
}

Response Structure

  • (dict) --

    • inferenceProfileName (string) --

      The name of the inference profile.

    • models (list) --

      A list of information about each model in the inference profile.

      • (dict) --

        Contains information about a model.

        • modelArn (string) --

          The Amazon Resource Name (ARN) of the model.

    • description (string) --

      The description of the inference profile.

    • createdAt (datetime) --

      The time at which the inference profile was created.

    • updatedAt (datetime) --

      The time at which the inference profile was last updated.

    • inferenceProfileArn (string) --

      The Amazon Resource Name (ARN) of the inference profile.

    • inferenceProfileId (string) --

      The unique identifier of the inference profile.

    • status (string) --

      The status of the inference profile. ACTIVE means that the inference profile is available to use.

    • type (string) --

      The type of the inference profile. SYSTEM_DEFINED means that the inference profile is defined by Amazon Bedrock.
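As a sketch, the response above could be unpacked like this. The field names come from the Response Syntax; the boto3 client setup and the `<profile-id>` placeholder are assumptions, and the live call requires valid AWS credentials, so it is left commented out:

```python
def summarize_profile(response: dict) -> str:
    """One-line summary of a GetInferenceProfile response dict."""
    model_arns = [m['modelArn'] for m in response.get('models', [])]
    return (f"{response['inferenceProfileName']} "
            f"({response['inferenceProfileId']}, {response['status']}): "
            f"{len(model_arns)} model(s)")

# Live usage (needs AWS credentials and a real inference profile ID):
# import boto3
# client = boto3.client('bedrock')
# resp = client.get_inference_profile(inferenceProfileIdentifier='<profile-id>')
# print(summarize_profile(resp))
```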

ListInferenceProfiles (new) Link ¶

Returns a list of inference profiles that you can use.

See also: AWS API Documentation

Request Syntax

client.list_inference_profiles(
    maxResults=123,
    nextToken='string'
)
type maxResults:

integer

param maxResults:

The maximum number of results to return in the response. If the total number of results is greater than this value, use the token returned in the response in the nextToken field when making another request to return the next batch of results.

type nextToken:

string

param nextToken:

If the total number of results is greater than the maxResults value provided in the request, enter the token returned in the nextToken field in the response in this field to return the next batch of results.

rtype:

dict

returns:

Response Syntax

{
    'inferenceProfileSummaries': [
        {
            'inferenceProfileName': 'string',
            'models': [
                {
                    'modelArn': 'string'
                },
            ],
            'description': 'string',
            'createdAt': datetime(2015, 1, 1),
            'updatedAt': datetime(2015, 1, 1),
            'inferenceProfileArn': 'string',
            'inferenceProfileId': 'string',
            'status': 'ACTIVE',
            'type': 'SYSTEM_DEFINED'
        },
    ],
    'nextToken': 'string'
}

Response Structure

  • (dict) --

    • inferenceProfileSummaries (list) --

      A list of information about each inference profile that you can use.

      • (dict) --

        Contains information about an inference profile.

        • inferenceProfileName (string) --

          The name of the inference profile.

        • models (list) --

          A list of information about each model in the inference profile.

          • (dict) --

            Contains information about a model.

            • modelArn (string) --

              The Amazon Resource Name (ARN) of the model.

        • description (string) --

          The description of the inference profile.

        • createdAt (datetime) --

          The time at which the inference profile was created.

        • updatedAt (datetime) --

          The time at which the inference profile was last updated.

        • inferenceProfileArn (string) --

          The Amazon Resource Name (ARN) of the inference profile.

        • inferenceProfileId (string) --

          The unique identifier of the inference profile.

        • status (string) --

          The status of the inference profile. ACTIVE means that the inference profile is available to use.

        • type (string) --

          The type of the inference profile. SYSTEM_DEFINED means that the inference profile is defined by Amazon Bedrock.

    • nextToken (string) --

      If the total number of results is greater than the maxResults value provided in the request, include this token in the nextToken field when making another request to return the next batch of results.
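The maxResults/nextToken loop described above can be sketched as a small helper. The loop shape follows the parameter descriptions in this section; the boto3 setup is an assumption and is commented out because it needs AWS credentials:

```python
def list_all_inference_profiles(client, page_size=10):
    """Collect every inference profile summary, following nextToken pages."""
    summaries = []
    kwargs = {'maxResults': page_size}
    while True:
        page = client.list_inference_profiles(**kwargs)
        summaries.extend(page.get('inferenceProfileSummaries', []))
        token = page.get('nextToken')
        if not token:  # no token means this was the last batch
            break
        kwargs['nextToken'] = token
    return summaries

# Live usage:
# import boto3
# client = boto3.client('bedrock')
# for p in list_all_inference_profiles(client):
#     print(p['inferenceProfileId'], p['status'])
```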