2025/10/24 - Amazon SageMaker Service - 3 updated API methods
Changes: Added the model data caching feature for inference components
{'Specification': {'DataCacheConfig': {'EnableCaching': 'boolean'}}}
Creates an inference component, which is a SageMaker AI hosting object that you can use to deploy a model to an endpoint. In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.
See also: AWS API Documentation
Request Syntax
client.create_inference_component(
    InferenceComponentName='string',
    EndpointName='string',
    VariantName='string',
    Specification={
        'ModelName': 'string',
        'Container': {
            'Image': 'string',
            'ArtifactUrl': 'string',
            'Environment': {
                'string': 'string'
            }
        },
        'StartupParameters': {
            'ModelDataDownloadTimeoutInSeconds': 123,
            'ContainerStartupHealthCheckTimeoutInSeconds': 123
        },
        'ComputeResourceRequirements': {
            'NumberOfCpuCoresRequired': ...,
            'NumberOfAcceleratorDevicesRequired': ...,
            'MinMemoryRequiredInMb': 123,
            'MaxMemoryRequiredInMb': 123
        },
        'BaseInferenceComponentName': 'string',
        'DataCacheConfig': {
            'EnableCaching': True|False
        }
    },
    RuntimeConfig={
        'CopyCount': 123
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
InferenceComponentName (string) -- [REQUIRED]
A unique name to assign to the inference component.
EndpointName (string) -- [REQUIRED]
The name of an existing endpoint where you host the inference component.
VariantName (string) --
The name of an existing production variant where you host the inference component.
Specification (dict) -- [REQUIRED]
Details about the resources to deploy with this inference component, including the model, container, and compute resources.
ModelName (string) --
The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.
Container (dict) --
Defines a container that provides the runtime environment for a model that you deploy with an inference component.
Image (string) --
The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.
ArtifactUrl (string) --
The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
Environment (dict) --
The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have a length of up to 1024 characters. We support up to 16 entries in the map.
(string) --
(string) --
StartupParameters (dict) --
Settings that take effect while the model container starts up.
ModelDataDownloadTimeoutInSeconds (integer) --
The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.
ContainerStartupHealthCheckTimeoutInSeconds (integer) --
The timeout value, in seconds, for your inference container to pass a health check by SageMaker AI Hosting. For more information about health checks, see How Your Container Should Respond to Health Check (Ping) Requests.
ComputeResourceRequirements (dict) --
The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component.
Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
NumberOfCpuCoresRequired (float) --
The number of CPU cores to allocate to run a model that you assign to an inference component.
NumberOfAcceleratorDevicesRequired (float) --
The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and Amazon Web Services Inferentia.
MinMemoryRequiredInMb (integer) -- [REQUIRED]
The minimum MB of memory to allocate to run a model that you assign to an inference component.
MaxMemoryRequiredInMb (integer) --
The maximum MB of memory to allocate to run a model that you assign to an inference component.
BaseInferenceComponentName (string) --
The name of an existing inference component that is to contain the inference component that you're creating with your request.
Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component.
When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type.
Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt. For an illustrative request, see the sketch after these parameter descriptions.
DataCacheConfig (dict) --
Settings that affect how the inference component caches data.
EnableCaching (boolean) -- [REQUIRED]
Sets whether the endpoint that hosts the inference component caches the model artifacts and container image.
With caching enabled, the endpoint caches this data in each instance that it provisions for the inference component. That way, the inference component deploys faster during the auto scaling process. If caching isn't enabled, the inference component takes longer to deploy because of the time it spends downloading the data.
RuntimeConfig (dict) --
Runtime settings for a model that is deployed with an inference component.
CopyCount (integer) -- [REQUIRED]
The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
Tags (list) --
A list of key-value pairs associated with the model. For more information, see Tagging Amazon Web Services resources in the Amazon Web Services General Reference.
(dict) --
A tag object that consists of a key and an optional value, used to manage metadata for SageMaker AI resources.
You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to SageMaker resources, see AddTags.
For more information on adding metadata to your Amazon Web Services resources with tagging, see Tagging Amazon Web Services resources. For advice on best practices for managing Amazon Web Services resources with tagging, see Tagging Best Practices: Implement an Effective Amazon Web Services Resource Tagging Strategy.
Key (string) -- [REQUIRED]
The tag key. Tag keys must be unique per resource.
Value (string) -- [REQUIRED]
The tag value.
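As an illustration of the adapter pattern described under BaseInferenceComponentName, here is a minimal sketch. All resource names and the S3 path are hypothetical, and the base inference component is assumed to already exist and host the foundation model that the adapter tailors:

import boto3

sagemaker = boto3.client("sagemaker")

response = sagemaker.create_inference_component(
    InferenceComponentName="my-adapter-ic",          # hypothetical name
    EndpointName="my-endpoint",                      # hypothetical existing endpoint
    Specification={
        "BaseInferenceComponentName": "my-base-ic",  # existing base component (hypothetical)
        "Container": {
            # Per the guidance above, only the location of the adapter
            # artifacts is specified via ArtifactUrl (hypothetical path).
            "ArtifactUrl": "s3://amzn-s3-demo-bucket/adapters/my-adapter.tar.gz",
        },
        # ComputeResourceRequirements is omitted on purpose: an adapter
        # uses the compute resources of its base inference component.
    },
)
print(response["InferenceComponentArn"])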
Return type: dict
Response Syntax
{
    'InferenceComponentArn': 'string'
}
Response Structure
(dict) --
InferenceComponentArn (string) --
The Amazon Resource Name (ARN) of the inference component.
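For a usage sketch of the standard (non-adapter) case with the new caching setting, assume a hypothetical endpoint, variant, and model; the compute sizes are illustrative only:

import boto3

sagemaker = boto3.client("sagemaker")

response = sagemaker.create_inference_component(
    InferenceComponentName="my-model-ic",   # hypothetical name
    EndpointName="my-endpoint",             # existing endpoint (hypothetical)
    VariantName="AllTraffic",               # hypothetical variant name
    Specification={
        "ModelName": "my-model",            # existing SageMaker AI model (hypothetical)
        "ComputeResourceRequirements": {
            "NumberOfAcceleratorDevicesRequired": 1.0,
            "MinMemoryRequiredInMb": 1024,  # required when this dict is present
        },
        # New in this release: cache the model artifacts and container image
        # on each instance so copies deploy faster during auto scaling.
        "DataCacheConfig": {"EnableCaching": True},
    },
    RuntimeConfig={"CopyCount": 2},
)
print(response["InferenceComponentArn"])

Once the component is InService, requests can target the model by passing InferenceComponentName to the SageMaker runtime's invoke_endpoint call.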
{'Specification': {'DataCacheConfig': {'EnableCaching': 'boolean'}}}
Returns information about an inference component.
See also: AWS API Documentation
Request Syntax
client.describe_inference_component(
    InferenceComponentName='string'
)
InferenceComponentName (string) -- [REQUIRED]
The name of the inference component.
Return type: dict
Response Syntax
{
    'InferenceComponentName': 'string',
    'InferenceComponentArn': 'string',
    'EndpointName': 'string',
    'EndpointArn': 'string',
    'VariantName': 'string',
    'FailureReason': 'string',
    'Specification': {
        'ModelName': 'string',
        'Container': {
            'DeployedImage': {
                'SpecifiedImage': 'string',
                'ResolvedImage': 'string',
                'ResolutionTime': datetime(2015, 1, 1)
            },
            'ArtifactUrl': 'string',
            'Environment': {
                'string': 'string'
            }
        },
        'StartupParameters': {
            'ModelDataDownloadTimeoutInSeconds': 123,
            'ContainerStartupHealthCheckTimeoutInSeconds': 123
        },
        'ComputeResourceRequirements': {
            'NumberOfCpuCoresRequired': ...,
            'NumberOfAcceleratorDevicesRequired': ...,
            'MinMemoryRequiredInMb': 123,
            'MaxMemoryRequiredInMb': 123
        },
        'BaseInferenceComponentName': 'string',
        'DataCacheConfig': {
            'EnableCaching': True|False
        }
    },
    'RuntimeConfig': {
        'DesiredCopyCount': 123,
        'CurrentCopyCount': 123
    },
    'CreationTime': datetime(2015, 1, 1),
    'LastModifiedTime': datetime(2015, 1, 1),
    'InferenceComponentStatus': 'InService'|'Creating'|'Updating'|'Failed'|'Deleting',
    'LastDeploymentConfig': {
        'RollingUpdatePolicy': {
            'MaximumBatchSize': {
                'Type': 'COPY_COUNT'|'CAPACITY_PERCENT',
                'Value': 123
            },
            'WaitIntervalInSeconds': 123,
            'MaximumExecutionTimeoutInSeconds': 123,
            'RollbackMaximumBatchSize': {
                'Type': 'COPY_COUNT'|'CAPACITY_PERCENT',
                'Value': 123
            }
        },
        'AutoRollbackConfiguration': {
            'Alarms': [
                {
                    'AlarmName': 'string'
                },
            ]
        }
    }
}
Response Structure
(dict) --
InferenceComponentName (string) --
The name of the inference component.
InferenceComponentArn (string) --
The Amazon Resource Name (ARN) of the inference component.
EndpointName (string) --
The name of the endpoint that hosts the inference component.
EndpointArn (string) --
The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
VariantName (string) --
The name of the production variant that hosts the inference component.
FailureReason (string) --
If the inference component status is Failed, the reason for the failure.
Specification (dict) --
Details about the resources that are deployed with this inference component.
ModelName (string) --
The name of the SageMaker AI model object that is deployed with the inference component.
Container (dict) --
Details about the container that provides the runtime environment for the model that is deployed with the inference component.
DeployedImage (dict) --
The Amazon Elastic Container Registry (Amazon ECR) path of the Docker image of the model that is hosted in this ProductionVariant.
If you used the registry/repository[:tag] form to specify the image path of the primary container when you created the model hosted in this ProductionVariant, the path resolves to a path of the form registry/repository[@digest]. A digest is a hash value that identifies a specific version of an image. For information about Amazon ECR paths, see Pulling an Image in the Amazon ECR User Guide.
SpecifiedImage (string) --
The image path you specified when you created the model.
ResolvedImage (string) --
The specific digest path of the image hosted in this ProductionVariant.
ResolutionTime (datetime) --
The date and time when the image path for the model resolved to the ResolvedImage.
ArtifactUrl (string) --
The Amazon S3 path where the model artifacts are stored.
Environment (dict) --
The environment variables to set in the Docker container.
(string) --
(string) --
StartupParameters (dict) --
Settings that take effect while the model container starts up.
ModelDataDownloadTimeoutInSeconds (integer) --
The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.
ContainerStartupHealthCheckTimeoutInSeconds (integer) --
The timeout value, in seconds, for your inference container to pass a health check by SageMaker AI Hosting. For more information about health checks, see How Your Container Should Respond to Health Check (Ping) Requests.
ComputeResourceRequirements (dict) --
The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component.
NumberOfCpuCoresRequired (float) --
The number of CPU cores to allocate to run a model that you assign to an inference component.
NumberOfAcceleratorDevicesRequired (float) --
The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and Amazon Web Services Inferentia.
MinMemoryRequiredInMb (integer) --
The minimum MB of memory to allocate to run a model that you assign to an inference component.
MaxMemoryRequiredInMb (integer) --
The maximum MB of memory to allocate to run a model that you assign to an inference component.
BaseInferenceComponentName (string) --
The name of the base inference component that contains this inference component.
DataCacheConfig (dict) --
Settings that affect how the inference component caches data.
EnableCaching (boolean) --
Indicates whether the inference component caches model artifacts as part of the auto scaling process.
RuntimeConfig (dict) --
Details about the runtime settings for the model that is deployed with the inference component.
DesiredCopyCount (integer) --
The number of runtime copies of the model container that you requested to deploy with the inference component.
CurrentCopyCount (integer) --
The number of runtime copies of the model container that are currently deployed.
CreationTime (datetime) --
The time when the inference component was created.
LastModifiedTime (datetime) --
The time when the inference component was last updated.
InferenceComponentStatus (string) --
The status of the inference component.
LastDeploymentConfig (dict) --
The deployment and rollback settings that you assigned to the inference component.
RollingUpdatePolicy (dict) --
Specifies a rolling deployment strategy for updating a SageMaker AI endpoint.
MaximumBatchSize (dict) --
The batch size for each rolling step in the deployment process. For each step, SageMaker AI provisions capacity on the new endpoint fleet, routes traffic to that fleet, and terminates capacity on the old endpoint fleet. The value must be between 5% and 50% of the copy count of the inference component.
Type (string) --
Specifies the endpoint capacity type.
COPY_COUNT
The endpoint activates based on the number of inference component copies.
CAPACITY_PERCENT
The endpoint activates based on the specified percentage of capacity.
Value (integer) --
Defines the capacity size, either as a number of inference component copies or a capacity percentage.
WaitIntervalInSeconds (integer) --
The length of the baking period, during which SageMaker AI monitors alarms for each batch on the new fleet.
MaximumExecutionTimeoutInSeconds (integer) --
The time limit for the total deployment. Exceeding this limit causes a timeout.
RollbackMaximumBatchSize (dict) --
The batch size for a rollback to the old endpoint fleet. If this field is absent, the value is set to the default, which is 100% of the total capacity. When the default is used, SageMaker AI provisions the entire capacity of the old fleet at once during rollback.
Type (string) --
Specifies the endpoint capacity type.
COPY_COUNT
The endpoint activates based on the number of inference component copies.
CAPACITY_PERCENT
The endpoint activates based on the specified percentage of capacity.
Value (integer) --
Defines the capacity size, either as a number of inference component copies or a capacity percentage.
AutoRollbackConfiguration (dict) --
Automatic rollback configuration for handling endpoint deployment failures and recovery.
Alarms (list) --
List of CloudWatch alarms in your account that are configured to monitor metrics on an endpoint. If any alarms are tripped during a deployment, SageMaker rolls back the deployment.
(dict) --
An Amazon CloudWatch alarm configured to monitor metrics on an endpoint.
AlarmName (string) --
The name of a CloudWatch alarm in your account.
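As a usage sketch (the component name is hypothetical), the following polls describe_inference_component until the component leaves a transitional status, then reads the caching flag and copy counts from the response shown above:

import time
import boto3

sagemaker = boto3.client("sagemaker")

name = "my-model-ic"  # hypothetical component name
while True:
    desc = sagemaker.describe_inference_component(InferenceComponentName=name)
    status = desc["InferenceComponentStatus"]
    if status not in ("Creating", "Updating"):
        break
    time.sleep(30)

if status == "Failed":
    raise RuntimeError(desc.get("FailureReason", "unknown failure"))

# Assumption: DataCacheConfig appears in the response only when it was set
# on the component, so default to False if it's absent.
caching = desc["Specification"].get("DataCacheConfig", {}).get("EnableCaching", False)
print(f"{name}: status={status}, caching enabled={caching}")
print("copies: desired={}, current={}".format(
    desc["RuntimeConfig"]["DesiredCopyCount"],
    desc["RuntimeConfig"]["CurrentCopyCount"]))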
{'Specification': {'DataCacheConfig': {'EnableCaching': 'boolean'}}}
Updates an inference component.
See also: AWS API Documentation
Request Syntax
client.update_inference_component(
    InferenceComponentName='string',
    Specification={
        'ModelName': 'string',
        'Container': {
            'Image': 'string',
            'ArtifactUrl': 'string',
            'Environment': {
                'string': 'string'
            }
        },
        'StartupParameters': {
            'ModelDataDownloadTimeoutInSeconds': 123,
            'ContainerStartupHealthCheckTimeoutInSeconds': 123
        },
        'ComputeResourceRequirements': {
            'NumberOfCpuCoresRequired': ...,
            'NumberOfAcceleratorDevicesRequired': ...,
            'MinMemoryRequiredInMb': 123,
            'MaxMemoryRequiredInMb': 123
        },
        'BaseInferenceComponentName': 'string',
        'DataCacheConfig': {
            'EnableCaching': True|False
        }
    },
    RuntimeConfig={
        'CopyCount': 123
    },
    DeploymentConfig={
        'RollingUpdatePolicy': {
            'MaximumBatchSize': {
                'Type': 'COPY_COUNT'|'CAPACITY_PERCENT',
                'Value': 123
            },
            'WaitIntervalInSeconds': 123,
            'MaximumExecutionTimeoutInSeconds': 123,
            'RollbackMaximumBatchSize': {
                'Type': 'COPY_COUNT'|'CAPACITY_PERCENT',
                'Value': 123
            }
        },
        'AutoRollbackConfiguration': {
            'Alarms': [
                {
                    'AlarmName': 'string'
                },
            ]
        }
    }
)
InferenceComponentName (string) -- [REQUIRED]
The name of the inference component.
Specification (dict) --
Details about the resources to deploy with this inference component, including the model, container, and compute resources.
ModelName (string) --
The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.
Container (dict) --
Defines a container that provides the runtime environment for a model that you deploy with an inference component.
Image (string) --
The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.
ArtifactUrl (string) --
The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
Environment (dict) --
The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have a length of up to 1024 characters. We support up to 16 entries in the map.
(string) --
(string) --
StartupParameters (dict) --
Settings that take effect while the model container starts up.
ModelDataDownloadTimeoutInSeconds (integer) --
The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.
ContainerStartupHealthCheckTimeoutInSeconds (integer) --
The timeout value, in seconds, for your inference container to pass a health check by SageMaker AI Hosting. For more information about health checks, see How Your Container Should Respond to Health Check (Ping) Requests.
ComputeResourceRequirements (dict) --
The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component.
Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
NumberOfCpuCoresRequired (float) --
The number of CPU cores to allocate to run a model that you assign to an inference component.
NumberOfAcceleratorDevicesRequired (float) --
The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and Amazon Web Services Inferentia.
MinMemoryRequiredInMb (integer) -- [REQUIRED]
The minimum MB of memory to allocate to run a model that you assign to an inference component.
MaxMemoryRequiredInMb (integer) --
The maximum MB of memory to allocate to run a model that you assign to an inference component.
BaseInferenceComponentName (string) --
The name of an existing inference component that is to contain the inference component that you're creating with your request.
Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component.
When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type.
Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
DataCacheConfig (dict) --
Settings that affect how the inference component caches data.
EnableCaching (boolean) -- [REQUIRED]
Sets whether the endpoint that hosts the inference component caches the model artifacts and container image.
With caching enabled, the endpoint caches this data in each instance that it provisions for the inference component. That way, the inference component deploys faster during the auto scaling process. If caching isn't enabled, the inference component takes longer to deploy because of the time it spends downloading the data.
RuntimeConfig (dict) --
Runtime settings for a model that is deployed with an inference component.
CopyCount (integer) -- [REQUIRED]
The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
DeploymentConfig (dict) --
The deployment configuration for the inference component. The configuration contains the desired deployment strategy and rollback settings.
RollingUpdatePolicy (dict) -- [REQUIRED]
Specifies a rolling deployment strategy for updating a SageMaker AI endpoint.
MaximumBatchSize (dict) -- [REQUIRED]
The batch size for each rolling step in the deployment process. For each step, SageMaker AI provisions capacity on the new endpoint fleet, routes traffic to that fleet, and terminates capacity on the old endpoint fleet. The value must be between 5% and 50% of the copy count of the inference component.
Type (string) -- [REQUIRED]
Specifies the endpoint capacity type.
COPY_COUNT
The endpoint activates based on the number of inference component copies.
CAPACITY_PERCENT
The endpoint activates based on the specified percentage of capacity.
Value (integer) -- [REQUIRED]
Defines the capacity size, either as a number of inference component copies or a capacity percentage.
WaitIntervalInSeconds (integer) -- [REQUIRED]
The length of the baking period, during which SageMaker AI monitors alarms for each batch on the new fleet.
MaximumExecutionTimeoutInSeconds (integer) --
The time limit for the total deployment. Exceeding this limit causes a timeout.
RollbackMaximumBatchSize (dict) --
The batch size for a rollback to the old endpoint fleet. If this field is absent, the value is set to the default, which is 100% of the total capacity. When the default is used, SageMaker AI provisions the entire capacity of the old fleet at once during rollback.
Type (string) -- [REQUIRED]
Specifies the endpoint capacity type.
COPY_COUNT
The endpoint activates based on the number of inference component copies.
CAPACITY_PERCENT
The endpoint activates based on the specified percentage of capacity.
Value (integer) -- [REQUIRED]
Defines the capacity size, either as a number of inference component copies or a capacity percentage.
AutoRollbackConfiguration (dict) --
Automatic rollback configuration for handling endpoint deployment failures and recovery.
Alarms (list) --
List of CloudWatch alarms in your account that are configured to monitor metrics on an endpoint. If any alarms are tripped during a deployment, SageMaker rolls back the deployment.
(dict) --
An Amazon CloudWatch alarm configured to monitor metrics on an endpoint.
AlarmName (string) --
The name of a CloudWatch alarm in your account.
Return type: dict
Response Syntax
{
    'InferenceComponentArn': 'string'
}
Response Structure
(dict) --
InferenceComponentArn (string) --
The Amazon Resource Name (ARN) of the inference component.
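To close, a minimal sketch of enabling caching on an existing component with a conservative rolling update. All names and sizes are hypothetical, and the sketch assumes that Specification fields you aren't changing can be omitted; if your component requires a full specification, include its other current fields as well:

import boto3

sagemaker = boto3.client("sagemaker")

response = sagemaker.update_inference_component(
    InferenceComponentName="my-model-ic",  # hypothetical name
    Specification={
        # EnableCaching is required whenever DataCacheConfig is present.
        "DataCacheConfig": {"EnableCaching": True},
    },
    DeploymentConfig={
        "RollingUpdatePolicy": {
            # Shift one copy at a time; with 2 copies this is 50% of the
            # copy count, within the documented 5%-50% constraint.
            "MaximumBatchSize": {"Type": "COPY_COUNT", "Value": 1},
            # Bake each batch for 2 minutes while alarms are monitored.
            "WaitIntervalInSeconds": 120,
            # Roll back the full old-fleet capacity at once if needed.
            "RollbackMaximumBatchSize": {"Type": "CAPACITY_PERCENT", "Value": 100},
        },
        "AutoRollbackConfiguration": {
            # Hypothetical CloudWatch alarm; if it trips during the
            # deployment, SageMaker rolls the update back.
            "Alarms": [{"AlarmName": "my-endpoint-error-alarm"}],
        },
    },
)
print(response["InferenceComponentArn"])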