2025/09/30 - Amazon EC2 Container Service - 24 updated api methods
Changes: This release adds support for Managed Instances on Amazon ECS.
Request
{'cluster': 'string', 'managedInstancesProvider': {'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': {'ec2InstanceProfileArn': 'string', 'instanceRequirements': {'acceleratorCount': {'max': 'integer', 'min': 'integer'}, 'acceleratorManufacturers': ['amazon-web-services | amd | nvidia | xilinx | habana'], 'acceleratorNames': ['a100 | inferentia | k520 | k80 | m60 | radeon-pro-v520 | t4 | vu9p | v100 | a10g | h100 | t4g'], 'acceleratorTotalMemoryMiB': {'max': 'integer', 'min': 'integer'}, 'acceleratorTypes': ['gpu | fpga | inference'], 'allowedInstanceTypes': ['string'], 'bareMetal': 'included | required | excluded', 'baselineEbsBandwidthMbps': {'max': 'integer', 'min': 'integer'}, 'burstablePerformance': 'included | required | excluded', 'cpuManufacturers': ['intel | amd | amazon-web-services'], 'excludedInstanceTypes': ['string'], 'instanceGenerations': ['current | previous'], 'localStorage': 'included | required | excluded', 'localStorageTypes': ['hdd | ssd'], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 'integer', 'memoryGiBPerVCpu': {'max': 'double', 'min': 'double'}, 'memoryMiB': {'max': 'integer', 'min': 'integer'}, 'networkBandwidthGbps': {'max': 'double', 'min': 'double'}, 'networkInterfaceCount': {'max': 'integer', 'min': 'integer'}, 'onDemandMaxPricePercentageOverLowestPrice': 'integer', 'requireHibernateSupport': 'boolean', 'spotMaxPricePercentageOverLowestPrice': 'integer', 'totalLocalStorageGB': {'max': 'double', 'min': 'double'}, 'vCpuCount': {'max': 'integer', 'min': 'integer'}}, 'monitoring': 'BASIC | DETAILED', 'networkConfiguration': {'securityGroups': ['string'], 'subnets': ['string']}, 'storageConfiguration': {'storageSizeGiB': 'integer'}}, 'propagateTags': 'CAPACITY_PROVIDER | NONE'}}
Response
{'capacityProvider': {'cluster': 'string', 'managedInstancesProvider': {'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': {'ec2InstanceProfileArn': 'string', 'instanceRequirements': {'acceleratorCount': {'max': 'integer', 'min': 'integer'}, 'acceleratorManufacturers': ['amazon-web-services | amd | nvidia | xilinx | habana'], 'acceleratorNames': ['a100 | inferentia | k520 | k80 | m60 | radeon-pro-v520 | t4 | vu9p | v100 | a10g | h100 | t4g'], 'acceleratorTotalMemoryMiB': {'max': 'integer', 'min': 'integer'}, 'acceleratorTypes': ['gpu | fpga | inference'], 'allowedInstanceTypes': ['string'], 'bareMetal': 'included | required | excluded', 'baselineEbsBandwidthMbps': {'max': 'integer', 'min': 'integer'}, 'burstablePerformance': 'included | required | excluded', 'cpuManufacturers': ['intel | amd | amazon-web-services'], 'excludedInstanceTypes': ['string'], 'instanceGenerations': ['current | previous'], 'localStorage': 'included | required | excluded', 'localStorageTypes': ['hdd | ssd'], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 'integer', 'memoryGiBPerVCpu': {'max': 'double', 'min': 'double'}, 'memoryMiB': {'max': 'integer', 'min': 'integer'}, 'networkBandwidthGbps': {'max': 'double', 'min': 'double'}, 'networkInterfaceCount': {'max': 'integer', 'min': 'integer'}, 'onDemandMaxPricePercentageOverLowestPrice': 'integer', 'requireHibernateSupport': 'boolean', 'spotMaxPricePercentageOverLowestPrice': 'integer', 'totalLocalStorageGB': {'max': 'double', 'min': 'double'}, 'vCpuCount': {'max': 'integer', 'min': 'integer'}}, 'monitoring': 'BASIC | DETAILED', 'networkConfiguration': {'securityGroups': ['string'], 'subnets': ['string']}, 'storageConfiguration': {'storageSizeGiB': 'integer'}}, 'propagateTags': 'CAPACITY_PROVIDER | NONE'}, 'status': {'PROVISIONING', 'DEPROVISIONING'}, 'type': 'EC2_AUTOSCALING | MANAGED_INSTANCES | FARGATE | FARGATE_SPOT', 'updateStatus': {'CREATE_COMPLETE', 'CREATE_FAILED', 'CREATE_IN_PROGRESS'}}}
Creates a capacity provider. Capacity providers are associated with a cluster and are used in capacity provider strategies to facilitate cluster auto scaling. You can create capacity providers for Amazon ECS Managed Instances and EC2 instances. Fargate has the predefined FARGATE and FARGATE_SPOT capacity providers.
See also: AWS API Documentation
Request Syntax
client.create_capacity_provider( name='string', cluster='string', autoScalingGroupProvider={ 'autoScalingGroupArn': 'string', 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' }, managedInstancesProvider={ 'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': { 'ec2InstanceProfileArn': 'string', 'networkConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ] }, 'storageConfiguration': { 'storageSizeGiB': 123 }, 'monitoring': 'BASIC'|'DETAILED', 'instanceRequirements': { 'vCpuCount': { 'min': 123, 'max': 123 }, 'memoryMiB': { 'min': 123, 'max': 123 }, 'cpuManufacturers': [ 'intel'|'amd'|'amazon-web-services', ], 'memoryGiBPerVCpu': { 'min': 123.0, 'max': 123.0 }, 'excludedInstanceTypes': [ 'string', ], 'instanceGenerations': [ 'current'|'previous', ], 'spotMaxPricePercentageOverLowestPrice': 123, 'onDemandMaxPricePercentageOverLowestPrice': 123, 'bareMetal': 'included'|'required'|'excluded', 'burstablePerformance': 'included'|'required'|'excluded', 'requireHibernateSupport': True|False, 'networkInterfaceCount': { 'min': 123, 'max': 123 }, 'localStorage': 'included'|'required'|'excluded', 'localStorageTypes': [ 'hdd'|'ssd', ], 'totalLocalStorageGB': { 'min': 123.0, 'max': 123.0 }, 'baselineEbsBandwidthMbps': { 'min': 123, 'max': 123 }, 'acceleratorTypes': [ 'gpu'|'fpga'|'inference', ], 'acceleratorCount': { 'min': 123, 'max': 123 }, 'acceleratorManufacturers': [ 'amazon-web-services'|'amd'|'nvidia'|'xilinx'|'habana', ], 'acceleratorNames': [ 'a100'|'inferentia'|'k520'|'k80'|'m60'|'radeon-pro-v520'|'t4'|'vu9p'|'v100'|'a10g'|'h100'|'t4g', ], 'acceleratorTotalMemoryMiB': { 'min': 123, 'max': 123 }, 'networkBandwidthGbps': { 'min': 123.0, 'max': 123.0 }, 'allowedInstanceTypes': [ 'string', ], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 123 } }, 'propagateTags': 'CAPACITY_PROVIDER'|'NONE' }, tags=[ { 'key': 'string', 'value': 'string' }, ] )
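For orientation, here is a minimal sketch of the call for a Managed Instances capacity provider. Every name, ARN, subnet ID, and security group ID below is a placeholder, and the sizing values are illustrative only.

import boto3

ecs = boto3.client("ecs")

# Minimal sketch: create a Managed Instances capacity provider scoped to one cluster.
# Every ARN, subnet ID, and security group ID below is a placeholder.
response = ecs.create_capacity_provider(
    name="example-mi-provider",
    cluster="example-cluster",
    managedInstancesProvider={
        "infrastructureRoleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
        "instanceLaunchTemplate": {
            "ec2InstanceProfileArn": "arn:aws:iam::111122223333:instance-profile/ecsInstanceProfile",
            "networkConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
            },
            "storageConfiguration": {"storageSizeGiB": 100},
            "monitoring": "BASIC",
            "instanceRequirements": {
                "vCpuCount": {"min": 2, "max": 8},
                "memoryMiB": {"min": 4096},
            },
        },
        "propagateTags": "CAPACITY_PROVIDER",
    },
    tags=[{"key": "team", "value": "platform"}],
)
print(response["capacityProvider"]["capacityProviderArn"])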
string
[REQUIRED]
The name of the capacity provider. Up to 255 characters are allowed, including letters (uppercase and lowercase), numbers, underscores (_), and hyphens (-). The name can't be prefixed with "aws", "ecs", or "fargate".
string
The name of the cluster to associate with the capacity provider. When you create a capacity provider with Amazon ECS Managed Instances, it becomes available only within the specified cluster.
dict
The details of the Auto Scaling group for the capacity provider.
autoScalingGroupArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or the Auto Scaling group name.
managedScaling (dict) --
The managed scaling settings for the Auto Scaling group capacity provider.
status (string) --
Determines whether to use managed scaling for the capacity provider.
targetCapacity (integer) --
The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than 0 and less than or equal to 100. For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a targetCapacity of 90. The default value of 100 percent results in the Amazon EC2 instances in your Auto Scaling group being completely used.
minimumScalingStepSize (integer) --
The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.
When additional capacity is required, Amazon ECS will scale out by the minimum scaling step size even if the actual demand is less than the minimum scaling step size.
maximumScalingStepSize (integer) --
The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. If this parameter is omitted, the default value of 10000 is used.
instanceWarmupPeriod (integer) --
The period of time, in seconds, before a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of 300 seconds is used.
managedTerminationProtection (string) --
The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is off.
When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on as well. For more information, see Instance Protection in the Auto Scaling User Guide.
When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in.
managedDraining (string) --
The managed draining option for the Auto Scaling group capacity provider. When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider.
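For comparison, a hedged sketch of the autoScalingGroupProvider argument as it could be passed to create_capacity_provider; the Auto Scaling group name and the numeric values are illustrative placeholders.

auto_scaling_group_provider = {
    # Either the Auto Scaling group ARN or its name is accepted; "example-asg" is a placeholder.
    "autoScalingGroupArn": "example-asg",
    "managedScaling": {
        "status": "ENABLED",
        "targetCapacity": 90,           # keep roughly 10% spare capacity
        "minimumScalingStepSize": 1,
        "maximumScalingStepSize": 100,
        "instanceWarmupPeriod": 300,
    },
    # Requires scale-in protection to be turned on for the group and its instances.
    "managedTerminationProtection": "ENABLED",
    "managedDraining": "ENABLED",
}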
dict
The configuration for the Amazon ECS Managed Instances provider. This configuration specifies how Amazon ECS manages Amazon EC2 instances on your behalf, including the infrastructure role, instance launch template, and tag propagation settings.
infrastructureRoleArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the infrastructure role that Amazon ECS uses to manage instances on your behalf. This role must have permissions to launch, terminate, and manage Amazon EC2 instances, as well as access to other Amazon Web Services services required for Amazon ECS Managed Instances functionality.
For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
instanceLaunchTemplate (dict) -- [REQUIRED]
The launch template configuration that specifies how Amazon ECS should launch Amazon EC2 instances. This includes the instance profile, network configuration, storage settings, and instance requirements for attribute-based instance type selection.
For more information, see Store instance launch parameters in Amazon EC2 launch templates in the Amazon EC2 User Guide.
ec2InstanceProfileArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the instance profile that Amazon ECS applies to Amazon ECS Managed Instances. This instance profile must include the necessary permissions for your tasks to access Amazon Web Services services and resources.
For more information, see Amazon ECS instance profile for Managed Instances in the Amazon ECS Developer Guide.
networkConfiguration (dict) -- [REQUIRED]
The network configuration for Amazon ECS Managed Instances. This specifies the subnets and security groups that instances use for network connectivity.
subnets (list) --
The list of subnet IDs where Amazon ECS can launch Amazon ECS Managed Instances. Instances are distributed across the specified subnets for high availability. All subnets must be in the same VPC.
(string) --
securityGroups (list) --
The list of security group IDs to apply to Amazon ECS Managed Instances. These security groups control the network traffic allowed to and from the instances.
(string) --
storageConfiguration (dict) --
The storage configuration for Amazon ECS Managed Instances. This defines the root volume size and type for the instances.
storageSizeGiB (integer) --
The size of the tasks volume, in GiB.
monitoring (string) --
CloudWatch provides two categories of monitoring: basic monitoring and detailed monitoring. By default, your managed instance is configured for basic monitoring. You can optionally enable detailed monitoring to help you more quickly identify and act on operational issues. You can enable or turn off detailed monitoring at launch or when the managed instance is running or stopped. For more information, see Detailed monitoring for Amazon ECS Managed Instances in the Amazon ECS Developer Guide.
instanceRequirements (dict) --
The instance requirements. You can specify:
The instance types
Instance requirements such as vCPU count, memory, network performance, and accelerator specifications
Amazon ECS automatically selects the instances that match the specified criteria.
vCpuCount (dict) -- [REQUIRED]
The minimum and maximum number of vCPUs for the instance types. Amazon ECS selects instance types that have vCPU counts within this range.
min (integer) -- [REQUIRED]
The minimum number of vCPUs. Instance types with fewer vCPUs than this value are excluded from selection.
max (integer) --
The maximum number of vCPUs. Instance types with more vCPUs than this value are excluded from selection.
memoryMiB (dict) -- [REQUIRED]
The minimum and maximum amount of memory in mebibytes (MiB) for the instance types. Amazon ECS selects instance types that have memory within this range.
min (integer) -- [REQUIRED]
The minimum amount of memory in MiB. Instance types with less memory than this value are excluded from selection.
max (integer) --
The maximum amount of memory in MiB. Instance types with more memory than this value are excluded from selection.
cpuManufacturers (list) --
The CPU manufacturers to include or exclude. You can specify intel, amd, or amazon-web-services to control which CPU types are used for your workloads.
(string) --
memoryGiBPerVCpu (dict) --
The minimum and maximum amount of memory per vCPU in gibibytes (GiB). This helps ensure that instance types have the appropriate memory-to-CPU ratio for your workloads.
min (float) --
The minimum amount of memory per vCPU in GiB. Instance types with a lower memory-to-vCPU ratio are excluded from selection.
max (float) --
The maximum amount of memory per vCPU in GiB. Instance types with a higher memory-to-vCPU ratio are excluded from selection.
excludedInstanceTypes (list) --
The instance types to exclude from selection. Use this to prevent Amazon ECS from selecting specific instance types that may not be suitable for your workloads.
(string) --
instanceGenerations (list) --
The instance generations to include. You can specify current to use the latest generation instances, or previous to include previous generation instances for cost optimization.
(string) --
spotMaxPricePercentageOverLowestPrice (integer) --
The maximum price for Spot instances as a percentage over the lowest priced On-Demand instance. This helps control Spot instance costs while maintaining access to capacity.
onDemandMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for On-Demand Instances, as a percentage higher than an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon ECS selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold.
bareMetal (string) --
Indicates whether to include bare metal instance types. Set to included to allow bare metal instances, excluded to exclude them, or required to use only bare metal instances.
burstablePerformance (string) --
Indicates whether to include burstable performance instance types (T2, T3, T3a, T4g). Set to included to allow burstable instances, excluded to exclude them, or required to use only burstable instances.
requireHibernateSupport (boolean) --
Indicates whether the instance types must support hibernation. When set to true, only instance types that support hibernation are selected.
networkInterfaceCount (dict) --
The minimum and maximum number of network interfaces for the instance types. This is useful for workloads that require multiple network interfaces.
min (integer) --
The minimum number of network interfaces. Instance types that support fewer network interfaces are excluded from selection.
max (integer) --
The maximum number of network interfaces. Instance types that support more network interfaces are excluded from selection.
localStorage (string) --
Indicates whether to include instance types with local storage. Set to included to allow local storage, excluded to exclude it, or required to use only instances with local storage.
localStorageTypes (list) --
The local storage types to include. You can specify hdd for hard disk drives, ssd for solid state drives, or both.
(string) --
totalLocalStorageGB (dict) --
The minimum and maximum total local storage in gigabytes (GB) for instance types with local storage.
min (float) --
The minimum total local storage in GB. Instance types with less local storage are excluded from selection.
max (float) --
The maximum total local storage in GB. Instance types with more local storage are excluded from selection.
baselineEbsBandwidthMbps (dict) --
The minimum and maximum baseline Amazon EBS bandwidth in megabits per second (Mbps). This is important for workloads with high storage I/O requirements.
min (integer) --
The minimum baseline Amazon EBS bandwidth in Mbps. Instance types with lower Amazon EBS bandwidth are excluded from selection.
max (integer) --
The maximum baseline Amazon EBS bandwidth in Mbps. Instance types with higher Amazon EBS bandwidth are excluded from selection.
acceleratorTypes (list) --
The accelerator types to include. You can specify gpu for graphics processing units, fpga for field programmable gate arrays, or inference for machine learning inference accelerators.
(string) --
acceleratorCount (dict) --
The minimum and maximum number of accelerators for the instance types. This is used when you need instances with specific numbers of GPUs or other accelerators.
min (integer) --
The minimum number of accelerators. Instance types with fewer accelerators are excluded from selection.
max (integer) --
The maximum number of accelerators. Instance types with more accelerators are excluded from selection.
acceleratorManufacturers (list) --
The accelerator manufacturers to include. You can specify nvidia, amd, amazon-web-services, or xilinx depending on your accelerator requirements.
(string) --
acceleratorNames (list) --
The specific accelerator names to include. For example, you can specify a100, v100, k80, or other specific accelerator models.
(string) --
acceleratorTotalMemoryMiB (dict) --
The minimum and maximum total accelerator memory in mebibytes (MiB). This is important for GPU workloads that require specific amounts of video memory.
min (integer) --
The minimum total accelerator memory in MiB. Instance types with less accelerator memory are excluded from selection.
max (integer) --
The maximum total accelerator memory in MiB. Instance types with more accelerator memory are excluded from selection.
networkBandwidthGbps (dict) --
The minimum and maximum network bandwidth in gigabits per second (Gbps). This is crucial for network-intensive workloads that require high throughput.
min (float) --
The minimum network bandwidth in Gbps. Instance types with lower network bandwidth are excluded from selection.
max (float) --
The maximum network bandwidth in Gbps. Instance types with higher network bandwidth are excluded from selection.
allowedInstanceTypes (list) --
The instance types to include in the selection. When specified, Amazon ECS only considers these instance types, subject to the other requirements specified.
(string) --
maxSpotPriceAsPercentageOfOptimalOnDemandPrice (integer) --
The maximum price for Spot instances as a percentage of the optimal On-Demand price. This provides more precise cost control for Spot instance selection.
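As a hedged illustration of attribute-based selection, an instanceRequirements value for a GPU workload might look like the sketch below; every threshold and instance type named here is an example, not a recommendation.

instance_requirements = {
    # min is required for vCpuCount and memoryMiB; max is optional.
    "vCpuCount": {"min": 4, "max": 32},
    "memoryMiB": {"min": 16384},
    # Optional attribute filters; all values below are illustrative.
    "cpuManufacturers": ["intel", "amd"],
    "instanceGenerations": ["current"],
    "acceleratorTypes": ["gpu"],
    "acceleratorCount": {"min": 1, "max": 4},
    "acceleratorManufacturers": ["nvidia"],
    "acceleratorTotalMemoryMiB": {"min": 16384},
    "networkBandwidthGbps": {"min": 10.0},
    "excludedInstanceTypes": ["g4dn.xlarge"],
}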
propagateTags (string) --
Specifies whether to propagate tags from the capacity provider to the Amazon ECS Managed Instances. When enabled, tags applied to the capacity provider are automatically applied to all instances launched by this provider.
list
The metadata that you apply to the capacity provider to help you categorize and organize it more conveniently. Each tag consists of a key and an optional value. You define both of them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
dict
Response Syntax
{ 'capacityProvider': { 'capacityProviderArn': 'string', 'name': 'string', 'cluster': 'string', 'status': 'PROVISIONING'|'ACTIVE'|'DEPROVISIONING'|'INACTIVE', 'autoScalingGroupProvider': { 'autoScalingGroupArn': 'string', 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' }, 'managedInstancesProvider': { 'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': { 'ec2InstanceProfileArn': 'string', 'networkConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ] }, 'storageConfiguration': { 'storageSizeGiB': 123 }, 'monitoring': 'BASIC'|'DETAILED', 'instanceRequirements': { 'vCpuCount': { 'min': 123, 'max': 123 }, 'memoryMiB': { 'min': 123, 'max': 123 }, 'cpuManufacturers': [ 'intel'|'amd'|'amazon-web-services', ], 'memoryGiBPerVCpu': { 'min': 123.0, 'max': 123.0 }, 'excludedInstanceTypes': [ 'string', ], 'instanceGenerations': [ 'current'|'previous', ], 'spotMaxPricePercentageOverLowestPrice': 123, 'onDemandMaxPricePercentageOverLowestPrice': 123, 'bareMetal': 'included'|'required'|'excluded', 'burstablePerformance': 'included'|'required'|'excluded', 'requireHibernateSupport': True|False, 'networkInterfaceCount': { 'min': 123, 'max': 123 }, 'localStorage': 'included'|'required'|'excluded', 'localStorageTypes': [ 'hdd'|'ssd', ], 'totalLocalStorageGB': { 'min': 123.0, 'max': 123.0 }, 'baselineEbsBandwidthMbps': { 'min': 123, 'max': 123 }, 'acceleratorTypes': [ 'gpu'|'fpga'|'inference', ], 'acceleratorCount': { 'min': 123, 'max': 123 }, 'acceleratorManufacturers': [ 'amazon-web-services'|'amd'|'nvidia'|'xilinx'|'habana', ], 'acceleratorNames': [ 'a100'|'inferentia'|'k520'|'k80'|'m60'|'radeon-pro-v520'|'t4'|'vu9p'|'v100'|'a10g'|'h100'|'t4g', ], 'acceleratorTotalMemoryMiB': { 'min': 123, 'max': 123 }, 'networkBandwidthGbps': { 'min': 123.0, 'max': 123.0 }, 'allowedInstanceTypes': [ 'string', ], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 123 } }, 'propagateTags': 'CAPACITY_PROVIDER'|'NONE' }, 'updateStatus': 'CREATE_IN_PROGRESS'|'CREATE_COMPLETE'|'CREATE_FAILED'|'DELETE_IN_PROGRESS'|'DELETE_COMPLETE'|'DELETE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED', 'updateStatusReason': 'string', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'type': 'EC2_AUTOSCALING'|'MANAGED_INSTANCES'|'FARGATE'|'FARGATE_SPOT' } }
Response Structure
(dict) --
capacityProvider (dict) --
The full description of the new capacity provider.
capacityProviderArn (string) --
The Amazon Resource Name (ARN) that identifies the capacity provider.
name (string) --
The name of the capacity provider.
cluster (string) --
The cluster that this capacity provider is associated with. Managed instances capacity providers are cluster-scoped, meaning they can only be used within their associated cluster.
status (string) --
The current status of the capacity provider. Only capacity providers in an ACTIVE state can be used in a cluster. When a capacity provider is successfully deleted, it has an INACTIVE status.
autoScalingGroupProvider (dict) --
The Auto Scaling group settings for the capacity provider.
autoScalingGroupArn (string) --
The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or the Auto Scaling group name.
managedScaling (dict) --
The managed scaling settings for the Auto Scaling group capacity provider.
status (string) --
Determines whether to use managed scaling for the capacity provider.
targetCapacity (integer) --
The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than 0 and less than or equal to 100. For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a targetCapacity of 90. The default value of 100 percent results in the Amazon EC2 instances in your Auto Scaling group being completely used.
minimumScalingStepSize (integer) --
The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.
When additional capacity is required, Amazon ECS will scale out by the minimum scaling step size even if the actual demand is less than the minimum scaling step size.
maximumScalingStepSize (integer) --
The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. If this parameter is omitted, the default value of 10000 is used.
instanceWarmupPeriod (integer) --
The period of time, in seconds, before a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of 300 seconds is used.
managedTerminationProtection (string) --
The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is off.
When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on as well. For more information, see Instance Protection in the Auto Scaling User Guide.
When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in.
managedDraining (string) --
The managed draining option for the Auto Scaling group capacity provider. When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider.
managedInstancesProvider (dict) --
The configuration for the Amazon ECS Managed Instances provider. This includes the infrastructure role, the launch template configuration, and tag propagation settings.
infrastructureRoleArn (string) --
The Amazon Resource Name (ARN) of the infrastructure role that Amazon ECS assumes to manage instances. This role must include permissions for Amazon EC2 instance lifecycle management, networking, and any additional Amazon Web Services services required for your workloads.
For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
instanceLaunchTemplate (dict) --
The launch template that defines how Amazon ECS launches Amazon ECS Managed Instances. This includes the instance profile for your tasks, network and storage configuration, and instance requirements that determine which Amazon EC2 instance types can be used.
For more information, see Store instance launch parameters in Amazon EC2 launch templates in the Amazon EC2 User Guide.
ec2InstanceProfileArn (string) --
The Amazon Resource Name (ARN) of the instance profile that Amazon ECS applies to Amazon ECS Managed Instances. This instance profile must include the necessary permissions for your tasks to access Amazon Web Services services and resources.
For more information, see Amazon ECS instance profile for Managed Instances in the Amazon ECS Developer Guide.
networkConfiguration (dict) --
The network configuration for Amazon ECS Managed Instances. This specifies the subnets and security groups that instances use for network connectivity.
subnets (list) --
The list of subnet IDs where Amazon ECS can launch Amazon ECS Managed Instances. Instances are distributed across the specified subnets for high availability. All subnets must be in the same VPC.
(string) --
securityGroups (list) --
The list of security group IDs to apply to Amazon ECS Managed Instances. These security groups control the network traffic allowed to and from the instances.
(string) --
storageConfiguration (dict) --
The storage configuration for Amazon ECS Managed Instances. This defines the root volume size and type for the instances.
storageSizeGiB (integer) --
The size of the tasks volume, in GiB.
monitoring (string) --
CloudWatch provides two categories of monitoring: basic monitoring and detailed monitoring. By default, your managed instance is configured for basic monitoring. You can optionally enable detailed monitoring to help you more quickly identify and act on operational issues. You can enable or turn off detailed monitoring at launch or when the managed instance is running or stopped. For more information, see Detailed monitoring for Amazon ECS Managed Instances in the Amazon ECS Developer Guide.
instanceRequirements (dict) --
The instance requirements. You can specify:
The instance types
Instance requirements such as vCPU count, memory, network performance, and accelerator specifications
Amazon ECS automatically selects the instances that match the specified criteria.
vCpuCount (dict) --
The minimum and maximum number of vCPUs for the instance types. Amazon ECS selects instance types that have vCPU counts within this range.
min (integer) --
The minimum number of vCPUs. Instance types with fewer vCPUs than this value are excluded from selection.
max (integer) --
The maximum number of vCPUs. Instance types with more vCPUs than this value are excluded from selection.
memoryMiB (dict) --
The minimum and maximum amount of memory in mebibytes (MiB) for the instance types. Amazon ECS selects instance types that have memory within this range.
min (integer) --
The minimum amount of memory in MiB. Instance types with less memory than this value are excluded from selection.
max (integer) --
The maximum amount of memory in MiB. Instance types with more memory than this value are excluded from selection.
cpuManufacturers (list) --
The CPU manufacturers to include or exclude. You can specify intel, amd, or amazon-web-services to control which CPU types are used for your workloads.
(string) --
memoryGiBPerVCpu (dict) --
The minimum and maximum amount of memory per vCPU in gibibytes (GiB). This helps ensure that instance types have the appropriate memory-to-CPU ratio for your workloads.
min (float) --
The minimum amount of memory per vCPU in GiB. Instance types with a lower memory-to-vCPU ratio are excluded from selection.
max (float) --
The maximum amount of memory per vCPU in GiB. Instance types with a higher memory-to-vCPU ratio are excluded from selection.
excludedInstanceTypes (list) --
The instance types to exclude from selection. Use this to prevent Amazon ECS from selecting specific instance types that may not be suitable for your workloads.
(string) --
instanceGenerations (list) --
The instance generations to include. You can specify current to use the latest generation instances, or previous to include previous generation instances for cost optimization.
(string) --
spotMaxPricePercentageOverLowestPrice (integer) --
The maximum price for Spot instances as a percentage over the lowest priced On-Demand instance. This helps control Spot instance costs while maintaining access to capacity.
onDemandMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for On-Demand Instances, as a percentage higher than an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon ECS selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold.
bareMetal (string) --
Indicates whether to include bare metal instance types. Set to included to allow bare metal instances, excluded to exclude them, or required to use only bare metal instances.
burstablePerformance (string) --
Indicates whether to include burstable performance instance types (T2, T3, T3a, T4g). Set to included to allow burstable instances, excluded to exclude them, or required to use only burstable instances.
requireHibernateSupport (boolean) --
Indicates whether the instance types must support hibernation. When set to true, only instance types that support hibernation are selected.
networkInterfaceCount (dict) --
The minimum and maximum number of network interfaces for the instance types. This is useful for workloads that require multiple network interfaces.
min (integer) --
The minimum number of network interfaces. Instance types that support fewer network interfaces are excluded from selection.
max (integer) --
The maximum number of network interfaces. Instance types that support more network interfaces are excluded from selection.
localStorage (string) --
Indicates whether to include instance types with local storage. Set to included to allow local storage, excluded to exclude it, or required to use only instances with local storage.
localStorageTypes (list) --
The local storage types to include. You can specify hdd for hard disk drives, ssd for solid state drives, or both.
(string) --
totalLocalStorageGB (dict) --
The minimum and maximum total local storage in gigabytes (GB) for instance types with local storage.
min (float) --
The minimum total local storage in GB. Instance types with less local storage are excluded from selection.
max (float) --
The maximum total local storage in GB. Instance types with more local storage are excluded from selection.
baselineEbsBandwidthMbps (dict) --
The minimum and maximum baseline Amazon EBS bandwidth in megabits per second (Mbps). This is important for workloads with high storage I/O requirements.
min (integer) --
The minimum baseline Amazon EBS bandwidth in Mbps. Instance types with lower Amazon EBS bandwidth are excluded from selection.
max (integer) --
The maximum baseline Amazon EBS bandwidth in Mbps. Instance types with higher Amazon EBS bandwidth are excluded from selection.
acceleratorTypes (list) --
The accelerator types to include. You can specify gpu for graphics processing units, fpga for field programmable gate arrays, or inference for machine learning inference accelerators.
(string) --
acceleratorCount (dict) --
The minimum and maximum number of accelerators for the instance types. This is used when you need instances with specific numbers of GPUs or other accelerators.
min (integer) --
The minimum number of accelerators. Instance types with fewer accelerators are excluded from selection.
max (integer) --
The maximum number of accelerators. Instance types with more accelerators are excluded from selection.
acceleratorManufacturers (list) --
The accelerator manufacturers to include. You can specify nvidia, amd, amazon-web-services, or xilinx depending on your accelerator requirements.
(string) --
acceleratorNames (list) --
The specific accelerator names to include. For example, you can specify a100, v100, k80, or other specific accelerator models.
(string) --
acceleratorTotalMemoryMiB (dict) --
The minimum and maximum total accelerator memory in mebibytes (MiB). This is important for GPU workloads that require specific amounts of video memory.
min (integer) --
The minimum total accelerator memory in MiB. Instance types with less accelerator memory are excluded from selection.
max (integer) --
The maximum total accelerator memory in MiB. Instance types with more accelerator memory are excluded from selection.
networkBandwidthGbps (dict) --
The minimum and maximum network bandwidth in gigabits per second (Gbps). This is crucial for network-intensive workloads that require high throughput.
min (float) --
The minimum network bandwidth in Gbps. Instance types with lower network bandwidth are excluded from selection.
max (float) --
The maximum network bandwidth in Gbps. Instance types with higher network bandwidth are excluded from selection.
allowedInstanceTypes (list) --
The instance types to include in the selection. When specified, Amazon ECS only considers these instance types, subject to the other requirements specified.
(string) --
maxSpotPriceAsPercentageOfOptimalOnDemandPrice (integer) --
The maximum price for Spot instances as a percentage of the optimal On-Demand price. This provides more precise cost control for Spot instance selection.
propagateTags (string) --
Determines whether tags from the capacity provider are automatically applied to Amazon ECS Managed Instances. This helps with cost allocation and resource management by ensuring consistent tagging across your infrastructure.
updateStatus (string) --
The update status of the capacity provider. The following are the possible states that are returned.
DELETE_IN_PROGRESS
The capacity provider is in the process of being deleted.
DELETE_COMPLETE
The capacity provider was successfully deleted and has an INACTIVE status.
DELETE_FAILED
The capacity provider can't be deleted. The update status reason provides further details about why the delete failed.
updateStatusReason (string) --
The update status reason. This provides further details about the update status for the capacity provider.
tags (list) --
The metadata that you apply to the capacity provider to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
type (string) --
The type of capacity provider. For Amazon ECS Managed Instances, this value is MANAGED_INSTANCES, indicating that Amazon ECS manages the underlying Amazon EC2 instances on your behalf.
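To tie the request and response together, here is a hedged sketch that waits for a newly created provider to leave the PROVISIONING state using the DescribeCapacityProviders API. The provider name is the placeholder used in the earlier sketch, and the 15-second polling interval is arbitrary.

import time

import boto3

ecs = boto3.client("ecs")

# Poll until the placeholder provider "example-mi-provider" is no longer PROVISIONING.
while True:
    described = ecs.describe_capacity_providers(capacityProviders=["example-mi-provider"])
    provider = described["capacityProviders"][0]
    if provider["status"] != "PROVISIONING":
        print(provider["status"], provider.get("updateStatus"))
        break
    time.sleep(15)  # arbitrary polling interval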
Request
{'launchType': {'MANAGED_INSTANCES'}}
Response
{'service': {'deployments': {'launchType': {'MANAGED_INSTANCES'}}, 'launchType': {'MANAGED_INSTANCES'}, 'taskSets': {'launchType': {'MANAGED_INSTANCES'}}}}
Runs and maintains your desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, use UpdateService.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service load balancing in the Amazon Elastic Container Service Developer Guide.
You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or updating a service. volumeConfigurations is only supported for REPLICA services, not DAEMON services. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer.
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains your desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It also stops tasks that don't meet the placement constraints. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Amazon ECS services in the Amazon Elastic Container Service Developer Guide.
The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are:
ECS: When you create a service that uses the ECS deployment controller, you can choose between the following deployment strategies (which you can set in the strategy field of deploymentConfiguration):
ROLLING: When you create a service that uses the rolling update (ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. For more information, see Deploy Amazon ECS services by replacing tasks in the Amazon Elastic Container Service Developer Guide. Rolling update deployments are best suited for the following scenarios:
Gradual service updates: You need to update your service incrementally without taking the entire service offline at once.
Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments).
Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one.
No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds.
Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners.
No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments).
Stateful applications: Your application maintains state that makes it difficult to run two parallel environments.
Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment.
Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios.
BLUE_GREEN: A blue/green deployment strategy (BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. For more information, see Amazon ECS blue/green deployments in the Amazon Elastic Container Service Developer Guide. Amazon ECS blue/green deployments are best suited for the following scenarios:
Service validation: When you need to validate new service revisions before directing production traffic to them
Zero downtime: When your service requires zero-downtime deployments
Instant roll back: When you need the ability to quickly roll back if issues are detected
Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect
External: Use a third-party deployment controller.
Blue/green deployment (powered by CodeDeploy): CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it.
When creating a service that uses the EXTERNAL deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.
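As a hedged sketch, selecting the blue/green strategy under the ECS deployment controller comes down to the deploymentController and deploymentConfiguration arguments shown below; the bake time and the percentage values are illustrative only.

deployment_controller = {"type": "ECS"}

deployment_configuration = {
    "strategy": "BLUE_GREEN",
    # Window after the production traffic shift during which the new (green) revision bakes
    # and a fast rollback is still possible; 10 minutes is an illustrative value.
    "bakeTimeInMinutes": 10,
    "maximumPercent": 200,
    "minimumHealthyPercent": 100,
}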
When the service scheduler launches new tasks, it determines task placement. For information about task placement and task placement strategies, see Amazon ECS task placement in the Amazon Elastic Container Service Developer Guide
See also: AWS API Documentation
Request Syntax
client.create_service( cluster='string', serviceName='string', taskDefinition='string', availabilityZoneRebalancing='ENABLED'|'DISABLED', loadBalancers=[ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], serviceRegistries=[ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], desiredCount=123, clientToken='string', launchType='EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', capacityProviderStrategy=[ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], platformVersion='string', role='string', deploymentConfiguration={ 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ], 'hookDetails': {...}|[...]|123|123.4|'string'|True|None }, ] }, placementConstraints=[ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], placementStrategy=[ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, healthCheckGracePeriodSeconds=123, schedulingStrategy='REPLICA'|'DAEMON', deploymentController={ 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, tags=[ { 'key': 'string', 'value': 'string' }, ], enableECSManagedTags=True|False, propagateTags='TASK_DEFINITION'|'SERVICE'|'NONE', enableExecuteCommand=True|False, serviceConnectConfiguration={ 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, volumeConfigurations=[ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], vpcLatticeConfigurations=[ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] )
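For orientation, a minimal sketch of a service that runs on the Managed Instances capacity provider created earlier. The cluster, task definition, provider, subnet, and security group values are placeholders, and the task definition is assumed to use the awsvpc network mode.

import boto3

ecs = boto3.client("ecs")

# Minimal sketch; all names and IDs, including the capacity provider, are placeholders.
response = ecs.create_service(
    cluster="example-cluster",
    serviceName="example-service",
    taskDefinition="example-task:1",
    desiredCount=2,
    capacityProviderStrategy=[
        {"capacityProvider": "example-mi-provider", "weight": 1, "base": 0},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    deploymentController={"type": "ECS"},
)
print(response["service"]["serviceArn"])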
string
The short name or full Amazon Resource Name (ARN) of the cluster that you run your service on. If you do not specify a cluster, the default cluster is assumed.
string
[REQUIRED]
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.
string
The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision isn't specified, the latest ACTIVE revision is used.
A task definition must be specified if the service uses either the ECS or CODE_DEPLOY deployment controllers.
For more information about deployment types, see Amazon ECS deployment types.
string
Indicates whether to use Availability Zone rebalancing for the service.
For more information, see Balancing an Amazon ECS service across Availability Zones in the Amazon Elastic Container Service Developer Guide.
list
A load balancer object representing the load balancers to use with your service. For more information, see Service load balancing in the Amazon Elastic Container Service Developer Guide.
If the service uses the rolling update (ECS) deployment controller and either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service uses the CODE_DEPLOY deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating a CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair). During a deployment, CodeDeploy determines which task set in your service has the status PRIMARY, and it associates one target group with it. Then, it also associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that you can use to perform validation tests with Lambda functions before routing production traffic to it.
If you use the CODE_DEPLOY deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name, and the container port to access from the load balancer. The container name must be as it appears in a container definition. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group that's specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name, and the container port to access from the load balancer. The container name must be as it appears in a container definition. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer that's specified here.
Services with tasks that use the awsvpc network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers aren't supported. Also, when you create any target groups for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance.
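For example, a single-target-group entry for an Application Load Balancer might look like the sketch below; the target group ARN, container name, and port are placeholders, and the target group is assumed to use the ip target type as described above.

load_balancers = [
    {
        # Placeholder ARN; the target group must use the "ip" target type for awsvpc tasks.
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example-tg/0123456789abcdef",
        "containerName": "web",
        "containerPort": 8080,
    }
]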
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
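For illustration, the following boto3 sketch attaches a single Application Load Balancer target group to a new service. It is a minimal sketch, not a definitive recipe: the cluster name, service name, task definition, target group ARN, subnet ID, and container details are placeholders that must be replaced with your own resources.

import boto3

ecs = boto3.client("ecs")

# All names and ARNs below are placeholders.
ecs.create_service(
    cluster="my-cluster",
    serviceName="web",
    taskDefinition="web:1",
    desiredCount=2,
    launchType="FARGATE",
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tg/0123456789abcdef",
            "containerName": "web",      # must match the name in the container definition
            "containerPort": 80,         # must match a containerPort in the task definition
        }
    ],
    # awsvpc tasks (required for Fargate) also need a network configuration.
    networkConfiguration={
        "awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"]}
    },
)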
list
The details of the service discovery registry to associate with this service. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
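The following is a minimal boto3 sketch of the serviceRegistries parameter, assuming a Cloud Map service that uses SRV records and a task definition that uses the bridge network mode; the registry ARN, container name, and port are placeholders.

import boto3

ecs = boto3.client("ecs")

# Placeholder Cloud Map service ARN; SRV records with the bridge or host network mode
# require a containerName and containerPort combination from the task definition.
ecs.create_service(
    cluster="my-cluster",
    serviceName="backend",
    taskDefinition="backend:3",
    desiredCount=1,
    serviceRegistries=[
        {
            "registryArn": "arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef",
            "containerName": "backend",
            "containerPort": 8080,
        }
    ],
)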
integer
The number of instantiations of the specified task definition to place and keep running in your service.
This is required if schedulingStrategy is REPLICA or isn't specified. If schedulingStrategy is DAEMON then this isn't required.
string
An identifier that you provide to ensure the idempotency of the request. It must be unique and is case sensitive. Up to 36 ASCII characters in the range of 33-126 (inclusive) are allowed.
string
The infrastructure that you run your service on. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
The FARGATE launch type runs your tasks on Fargate On-Demand infrastructure.
The EC2 launch type runs your tasks on Amazon EC2 instances registered to your cluster.
The EXTERNAL launch type runs your tasks on your on-premises server or virtual machine (VM) capacity registered to your cluster.
A service can use either a launch type or a capacity provider strategy. If a launchType is specified, the capacityProviderStrategy parameter must be omitted.
list
The capacity provider strategy to use for the service.
If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used.
A capacity provider strategy can contain a maximum of 20 capacity providers.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) -- [REQUIRED]
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
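As a sketch of how base and weight interact, the following capacityProviderStrategy places the first two tasks on FARGATE and then distributes the remaining tasks between FARGATE and FARGATE_SPOT at a 1:4 ratio; the cluster name, service name, task definition, and subnet ID are placeholders.

import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="batch-worker",        # placeholder
    taskDefinition="batch-worker:9",   # placeholder
    desiredCount=12,
    capacityProviderStrategy=[
        # base: the first 2 tasks always run on FARGATE
        {"capacityProvider": "FARGATE", "base": 2, "weight": 1},
        # weight: remaining tasks split 1:4 between FARGATE and FARGATE_SPOT
        {"capacityProvider": "FARGATE_SPOT", "weight": 4},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"]}   # placeholder
    },
)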
string
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
string
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition doesn't use the awsvpc network mode. If you specify the role parameter, you must also specify a load balancer object with the loadBalancers parameter.
If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/ then you would specify /foo/bar as the role name. For more information, see Friendly names and paths in the IAM User Guide.
dict
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide
enable (boolean) -- [REQUIRED]
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) -- [REQUIRED]
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the service uses either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
If any tasks are unhealthy and if maximumPercent doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one — using the minimumHealthyPercent as a constraint — to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services .
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) -- [REQUIRED]
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) -- [REQUIRED]
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) -- [REQUIRED]
Determines whether to use the CloudWatch alarm option in the service deployment process.
strategy (string) --
The deployment strategy for the service. Choose from these valid values:
ROLLING - When you create a service which uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration.
BLUE_GREEN - A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed.
bakeTimeInMinutes (integer) --
The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted.
You must provide this parameter when you use the BLUE_GREEN deployment strategy.
lifecycleHooks (list) --
An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle.
(dict) --
A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets.
For more information, see Lifecycle hooks for Amazon ECS service deployments in the Amazon Elastic Container Service Developer Guide.
hookTargetArn (string) --
The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported.
You must provide this parameter when configuring a deployment lifecycle hook.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf.
For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the Amazon Elastic Container Service Developer Guide.
lifecycleStages (list) --
The lifecycle stages at which to run the hook. Choose from these valid values:
RECONCILE_SERVICE - The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage.
PRE_SCALE_UP - The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
POST_SCALE_UP - The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
TEST_TRAFFIC_SHIFT - The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage.
POST_TEST_TRAFFIC_SHIFT - The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage.
PRODUCTION_TRAFFIC_SHIFT - Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. You can use a lifecycle hook for this stage.
POST_PRODUCTION_TRAFFIC_SHIFT - The production traffic shift is complete. You can use a lifecycle hook for this stage.
You must provide this parameter when configuring a deployment lifecycle hook.
(string) --
hookDetails (document) --
The details of the deployment lifecycle hook. This provides additional configuration for how the hook should be executed during deployment operations on Amazon ECS Managed Instances.
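The two dictionaries below are minimal sketches of the deploymentConfiguration parameter: one for a rolling update with the circuit breaker, and one for a blue/green deployment with a bake time and a single lifecycle hook. The Lambda function and IAM role ARNs are placeholders.

# Rolling update with the circuit breaker and automatic rollback.
rolling_deployment_configuration = {
    "deploymentCircuitBreaker": {"enable": True, "rollback": True},
    "maximumPercent": 200,
    "minimumHealthyPercent": 100,
}

# Blue/green deployment with a 15-minute bake time and one lifecycle hook.
blue_green_deployment_configuration = {
    "strategy": "BLUE_GREEN",
    "bakeTimeInMinutes": 15,
    "lifecycleHooks": [
        {
            "hookTargetArn": "arn:aws:lambda:us-east-1:111122223333:function:post-test-check",  # placeholder
            "roleArn": "arn:aws:iam::111122223333:role/ecsDeploymentHookRole",                  # placeholder
            "lifecycleStages": ["POST_TEST_TRAFFIC_SHIFT"],
        }
    ],
}

# Either dictionary is passed as deploymentConfiguration to create_service.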
list
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
list
The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules for each service.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
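The following boto3 sketch combines a placement constraint with two placement strategies for an EC2 launch type service; the cluster query language expression and the names are examples only.

import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="worker",              # placeholder
    taskDefinition="worker:5",         # placeholder
    desiredCount=6,
    launchType="EC2",
    placementConstraints=[
        # Restrict placement to t3 instances (example expression).
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ t3.*"}
    ],
    placementStrategy=[
        # Spread across Availability Zones first, then binpack on memory.
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
)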
dict
The network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own elastic network interface, and it isn't supported for other network modes. For more information, see Task networking in the Amazon Elastic Container Service Developer Guide.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) -- [REQUIRED]
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
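A minimal sketch of the awsvpcConfiguration structure follows; the subnet and security group IDs are placeholders.

network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],  # up to 16 subnets
        "securityGroups": ["sg-0123456789abcdef0"],                           # up to 5 security groups
        "assignPublicIp": "DISABLED",
    }
}
# Passed as networkConfiguration=network_configuration when the task definition uses the awsvpc network mode.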
integer
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started. If you don't specify a health check grace period value, the default value of 0 is used. If you don't use any of the health checks, then healthCheckGracePeriodSeconds is unused.
If your service's tasks take a while to start and respond to health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
string
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service uses the CODE_DEPLOY or EXTERNAL deployment controller types.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that don't meet the placement constraints. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
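For example, a daemon service that runs one task per container instance is created without a desiredCount; the agent names below are hypothetical.

import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="node-agent",          # hypothetical per-instance agent
    taskDefinition="node-agent:2",
    launchType="EC2",
    schedulingStrategy="DAEMON",       # one task on each active container instance
)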
dict
The deployment controller to use for the service. If no deployment controller is specified, the default value of ECS is used.
type (string) -- [REQUIRED]
The deployment controller type to use.
The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are:
ECS - When you create a service which uses the ECS deployment controller, you can choose between the following deployment strategies:
ROLLING: When you create a service which uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios:
Gradual service updates: You need to update your service incrementally without taking the entire service offline at once.
Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments).
Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one.
No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds.
Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners.
No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments).
Stateful applications: Your application maintains state that makes it difficult to run two parallel environments.
Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment.
Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios.
BLUE_GREEN: A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios:
Service validation: When you need to validate new service revisions before directing production traffic to them
Zero downtime: When your service requires zero-downtime deployments
Instant roll back: When you need the ability to quickly roll back if issues are detected
Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect
EXTERNAL - Use a third-party deployment controller.
CODE_DEPLOY - Blue/green deployment powered by CodeDeploy. CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it.
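The deployment controller is passed as a single-field dictionary; a brief sketch follows.

# ECS is the default; CODE_DEPLOY and EXTERNAL are the other valid types.
deployment_controller = {"type": "ECS"}
# Passed as deploymentController=deployment_controller to create_service.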
list
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
boolean
Specifies whether to turn on Amazon ECS managed tags for the tasks within the service. For more information, see Tagging your Amazon ECS resources in the Amazon Elastic Container Service Developer Guide.
When you use Amazon ECS managed tags, you must set the propagateTags request parameter.
string
Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action.
You must set this to a value other than NONE when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide.
The default is NONE.
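As a sketch of the tagging parameters, the following call applies two example tags, turns on Amazon ECS managed tags, and propagates the service tags to its tasks; the tag keys and values are placeholders.

import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="api",                 # placeholder
    taskDefinition="api:7",            # placeholder
    desiredCount=2,
    tags=[
        {"key": "environment", "value": "production"},   # placeholder tag
        {"key": "team", "value": "platform"},            # placeholder tag
    ],
    enableECSManagedTags=True,
    propagateTags="SERVICE",           # propagate the service tags to its tasks
)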
boolean
Determines whether the execute command functionality is turned on for the service. If true, this enables execute command functionality on all containers in the service tasks.
dict
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) -- [REQUIRED]
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) -- [REQUIRED]
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) -- [REQUIRED]
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
testTrafficRules (dict) --
The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic.
header (dict) -- [REQUIRED]
The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers.
name (string) -- [REQUIRED]
The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like X-Test-Version or X-Canary-Request that can be used to identify test traffic.
value (dict) --
The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions.
exact (string) -- [REQUIRED]
The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments.
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP/HTTP2/GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) -- [REQUIRED]
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) -- [REQUIRED]
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs for them to appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve a potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) -- [REQUIRED]
The name of the secret.
valueFrom (string) -- [REQUIRED]
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
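The dictionary below is a minimal Service Connect sketch with one named port, one client alias, and an awslogs log configuration for the proxy; the namespace, port name, log group, and Region are placeholders, and the portName must match a portMappings name in the task definition.

service_connect_configuration = {
    "enabled": True,
    "namespace": "internal",                     # Cloud Map namespace name or ARN (placeholder)
    "services": [
        {
            "portName": "api",                   # must match a portMappings name in the task definition
            "discoveryName": "api",
            "clientAliases": [
                {"port": 8080, "dnsName": "api.internal"}   # name that client tasks use to connect
            ],
        }
    ],
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/service-connect-proxy",  # placeholder log group
            "awslogs-region": "us-east-1",                  # placeholder Region
            "awslogs-stream-prefix": "api",
        },
    },
}
# Passed as serviceConnectConfiguration=service_connect_configuration to create_service.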
list
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) -- [REQUIRED]
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as false, the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either snapshotId or sizeInGiB in your volume configuration. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeInitializationRate (integer) --
The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a snapshotId. For more information, see Initialize Amazon EBS volumes in the Amazon EBS User Guide.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) -- [REQUIRED]
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as they are reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) -- [REQUIRED]
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
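As a rough illustration of how the managed EBS volume fields above fit together, here is a minimal boto3 sketch. The cluster, service, role ARN, and tag values are placeholders, and it assumes the task definition declares a volume named "data" with configuredAtLaunch enabled:

import boto3

ecs = boto3.client("ecs")

# Placeholder names and ARNs; the task definition is assumed to declare a
# volume named "data" with configuredAtLaunch set to true.
response = ecs.create_service(
    cluster="demo-cluster",
    serviceName="demo-service",
    taskDefinition="demo-task:1",
    desiredCount=1,
    volumeConfigurations=[
        {
            "name": "data",
            "managedEBSVolume": {
                "volumeType": "gp3",
                "sizeInGiB": 100,      # within the 1-16,384 range for gp3
                "iops": 3000,          # the gp3 default baseline
                "throughput": 125,
                "filesystemType": "xfs",
                "encrypted": True,
                "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",
                "tagSpecifications": [
                    {
                        "resourceType": "volume",
                        "propagateTags": "SERVICE",
                        "tags": [{"key": "team", "value": "storage"}],
                    }
                ],
            },
        }
    ],
)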
vpcLatticeConfigurations (list) --
The VPC Lattice configuration for the service being created.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to.
roleArn (string) -- [REQUIRED]
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) -- [REQUIRED]
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) -- [REQUIRED]
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
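For the VPC Lattice fields above, a similarly hedged sketch; the ARNs are placeholders, and "web" is assumed to be a portMapping name in the task definition:

import boto3

ecs = boto3.client("ecs")

response = ecs.create_service(
    cluster="demo-cluster",
    serviceName="lattice-service",
    taskDefinition="demo-task:1",
    desiredCount=2,
    vpcLatticeConfigurations=[
        {
            # Placeholder ARNs; the role is the ECS infrastructure role used
            # to manage the VPC Lattice resources.
            "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",
            "targetGroupArn": "arn:aws:vpc-lattice:us-east-1:123456789012:targetgroup/tg-0123456789abcdef0",
            "portName": "web",  # must match a portMapping name in the task definition
        }
    ],
)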
dict
Response Syntax
{ 'service': { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ], 'hookDetails': {...}|[...]|123|123.4|'string'|True|None }, ] }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 
'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False, 'availabilityZoneRebalancing': 'ENABLED'|'DISABLED' } }Response Structure
(dict) --
service (dict) --
The full description of your service following the create call.
A service will return either a capacityProviderStrategy or launchType parameter, but not both, depending on which one was specified when it was created.
If a service is using the ECS deployment controller, the deploymentController and taskSets parameters will not be returned.
If the service uses the CODE_DEPLOY deployment controller, the deploymentController, taskSets, and deployments parameters will be returned; however, the deployments parameter will be an empty list.
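A minimal sketch of reading this structure; the cluster and service names are placeholders, and describe_services returns the same service shape described below:

import boto3

ecs = boto3.client("ecs")

service = ecs.describe_services(
    cluster="demo-cluster", services=["demo-service"]
)["services"][0]

print(service["serviceArn"], service["status"], service["desiredCount"])

# With the ECS deployment controller, rollout progress appears under "deployments";
# with CODE_DEPLOY, "deployments" is returned as an empty list and "taskSets" is used.
for deployment in service.get("deployments", []):
    print(deployment["id"], deployment["rolloutState"], deployment["runningCount"])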
serviceArn (string) --
The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide.
serviceName (string) --
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers (list) --
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
serviceRegistries (list) --
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
status (string) --
The status of the service. The valid values are ACTIVE, DRAINING, or INACTIVE.
desiredCount (integer) --
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
runningCount (integer) --
The number of tasks in the cluster that are in the RUNNING state.
pendingCount (integer) --
The number of tasks in the cluster that are in the PENDING state.
launchType (string) --
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy (list) --
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
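To make the base and weight behavior described above concrete, here is a hedged sketch of a strategy list; the provider names are hypothetical and would need to already be associated with the cluster. It could be passed as the capacityProviderStrategy argument to RunTask or CreateService:

# Hypothetical capacity provider names.
capacity_provider_strategy = [
    {"capacityProvider": "on-demand-provider", "base": 2, "weight": 1},
    {"capacityProvider": "spot-provider", "base": 0, "weight": 4},
]
# The first 2 tasks are placed on on-demand-provider to satisfy its base; after
# that, tasks are split roughly 1:4 between on-demand-provider and spot-provider.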
platformVersion (string) --
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
taskDefinition (string) --
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
deploymentConfiguration (dict) --
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide
enable (boolean) --
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the service uses either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
If any tasks are unhealthy and if maximumPercent doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one — using the minimumHealthyPercent as a constraint — to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services .
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
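A small sketch of the arithmetic described above for maximumPercent and minimumHealthyPercent during a rolling update:

import math

desired_count = 4
maximum_percent = 200          # upper bound, rounded down to the nearest integer
minimum_healthy_percent = 50   # lower bound, rounded up to the nearest integer

max_running_or_pending = desired_count * maximum_percent // 100               # 8 tasks
min_healthy_tasks = math.ceil(desired_count * minimum_healthy_percent / 100)  # 2 tasks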
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) --
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) --
Determines whether to use the CloudWatch alarm option in the service deployment process.
strategy (string) --
The deployment strategy for the service. Choose from these valid values:
ROLLING - When you create a service which uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration.
BLUE_GREEN - A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed.
bakeTimeInMinutes (integer) --
The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted.
You must provide this parameter when you use the BLUE_GREEN deployment strategy.
lifecycleHooks (list) --
An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle.
(dict) --
A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets.
For more information, see Lifecycle hooks for Amazon ECS service deployments in the Amazon Elastic Container Service Developer Guide.
hookTargetArn (string) --
The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported.
You must provide this parameter when configuring a deployment lifecycle hook.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf.
For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the Amazon Elastic Container Service Developer Guide.
lifecycleStages (list) --
The lifecycle stages at which to run the hook. Choose from these valid values:
RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage.
PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage.
POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage.
PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. You can use a lifecycle hook for this stage.
POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. You can use a lifecycle hook for this stage.
You must provide this parameter when configuring a deployment lifecycle hook.
(string) --
hookDetails (document) --
The details of the deployment lifecycle hook. This provides additional configuration for how the hook should be executed during deployment operations on Amazon ECS Managed Instances.
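As a rough sketch of how these deployment settings combine, the dicts below follow the shape these fields describe; the alarm name, Lambda ARN, and role ARN are placeholders, and this assumes the same structure is accepted as the deploymentConfiguration argument to create_service and update_service:

# Rolling update with the circuit breaker and CloudWatch alarm rollback enabled.
rolling_config = {
    "strategy": "ROLLING",
    "maximumPercent": 200,
    "minimumHealthyPercent": 100,
    "deploymentCircuitBreaker": {"enable": True, "rollback": True},
    "alarms": {"alarmNames": ["demo-service-errors"], "enable": True, "rollback": True},
}

# Blue/green with a bake time and one lifecycle hook (placeholder ARNs).
blue_green_config = {
    "strategy": "BLUE_GREEN",
    "bakeTimeInMinutes": 15,
    "lifecycleHooks": [
        {
            "hookTargetArn": "arn:aws:lambda:us-east-1:123456789012:function:post-test-check",
            "roleArn": "arn:aws:iam::123456789012:role/ecsDeploymentHookRole",
            "lifecycleStages": ["POST_TEST_TRAFFIC_SHIFT"],
        }
    ],
}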
taskSets (list) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
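A one-line sketch of that rounding rule:

import math

def computed_desired_count(service_desired_count: int, scale_percent: float) -> int:
    # desiredCount multiplied by the task set's scale percentage, rounded up.
    return math.ceil(service_desired_count * scale_percent / 100)

assert computed_desired_count(3, 40) == 2   # 1.2 rounds up to 2 tasks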
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that is associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
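A hedged sketch of the awsvpcConfiguration shape described above; the subnet and security group IDs are placeholders:

network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0aaa1111bbb222333", "subnet-0ccc4444ddd555666"],  # up to 16
        "securityGroups": ["sg-0123456789abcdef0"],                           # up to 5
        "assignPublicIp": "DISABLED",  # the default for create-service and update-service
    }
}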
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
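A hedged polling sketch built on the stability conditions above; the names are placeholders, and it assumes the service's deployment type populates taskSets (as with CODE_DEPLOY or EXTERNAL):

import time
import boto3

ecs = boto3.client("ecs")

def wait_for_steady_task_sets(cluster: str, service_name: str, delay: int = 15) -> dict:
    # Poll describe_services until every task set reports STEADY_STATE.
    while True:
        service = ecs.describe_services(cluster=cluster, services=[service_name])["services"][0]
        statuses = [ts["stabilityStatus"] for ts in service.get("taskSets", [])]
        if statuses and all(s == "STEADY_STATE" for s in statuses):
            return service
        time.sleep(delay)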
tags (list) --
The metadata that you apply to the task set to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as they are reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as they are reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment.
deployments (list) --
The current state of deployments for the service.
(dict) --
The details of an Amazon ECS service deployment. This is used only when a service uses the ECS deployment controller type.
id (string) --
The ID of the deployment.
status (string) --
The status of the deployment. The following describes each state.
PRIMARY
The most recent deployment of a service.
ACTIVE
A service deployment that still has running tasks, but is in the process of being replaced with a new PRIMARY deployment.
INACTIVE
A deployment that has been completely replaced.
taskDefinition (string) --
The most recent task definition that was specified for the tasks in the service to use.
desiredCount (integer) --
The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount (integer) --
The number of tasks in the deployment that are in the PENDING status.
runningCount (integer) --
The number of tasks in the deployment that are in the RUNNING status.
failedTasks (integer) --
The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a RUNNING state, or if it fails any of its defined health checks and is stopped.
createdAt (datetime) --
The Unix timestamp for the time when the service deployment was created.
updatedAt (datetime) --
The Unix timestamp for the time when the service deployment was last updated.
capacityProviderStrategy (list) --
The capacity provider strategy that the deployment is using.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
launchType (string) --
The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
platformVersion (string) --
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service are running on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
rolloutState (string) --
The rollout state of the deployment. When a service deployment is started, it begins in an IN_PROGRESS state. When the service reaches a steady state, the deployment transitions to a COMPLETED state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a FAILED state. A deployment in FAILED state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker.
rolloutStateReason (string) --
A description of the rollout state of a deployment.
serviceConnectConfiguration (dict) --
The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments.
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) --
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
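A minimal sketch of a Service Connect configuration with one service entry, assuming a port mapping named api exists in the task definition; the namespace, names, and ports are illustrative only.
service_connect_configuration = {
    'enabled': True,
    'namespace': 'internal.example',          # Cloud Map namespace (placeholder)
    'services': [
        {
            'portName': 'api',                # must match a portMappings name in the task definition
            'discoveryName': 'orders-api',    # Cloud Map service name that Amazon ECS creates
            'clientAliases': [
                {
                    'port': 8080,             # port the Service Connect proxy listens on in the namespace
                    'dnsName': 'orders'       # short name clients use, e.g. http://orders:8080
                }
            ]
        }
    ]
}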
testTrafficRules (dict) --
The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic.
header (dict) --
The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers.
name (string) --
The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like X-Test-Version or X-Canary-Request that can be used to identify test traffic.
value (dict) --
The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions.
exact (string) --
The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments.
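For example, a test traffic rule that matches an illustrative header could be expressed like this; the header name and value are assumptions, not defaults.
test_traffic_rules = {
    'header': {
        'name': 'X-Canary-Request',    # header examined on incoming requests
        'value': {
            'exact': 'true'            # only requests with X-Canary-Request: true reach the new revision
        }
    }
}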
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP, HTTP2, and GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) --
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
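A sketch of the TLS block, with placeholder ARNs for the Private Certificate Authority, the KMS key, and the infrastructure role; the role name is assumed, not prescribed.
tls = {
    'issuerCertificateAuthority': {
        'awsPcaAuthorityArn': 'arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/EXAMPLE'
    },
    'kmsKey': 'arn:aws:kms:us-east-1:111122223333:key/EXAMPLE',                  # placeholder key
    'roleArn': 'arn:aws:iam::111122223333:role/ecsServiceConnectTlsRole'         # placeholder role
}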
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
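Putting several of the awslogs options described above together, a log configuration might look like the following sketch; the log group, Region, and prefix are placeholders.
log_configuration = {
    'logDriver': 'awslogs',
    'options': {
        'awslogs-group': '/ecs/orders-service',   # must exist unless awslogs-create-group is true
        'awslogs-region': 'us-east-1',
        'awslogs-stream-prefix': 'orders',        # stream becomes orders/<container-name>/<task-id>
        'awslogs-create-group': 'true',
        'mode': 'non-blocking',                   # buffer logs instead of blocking stdout/stderr writes
        'max-buffer-size': '25m'                  # only applies with non-blocking mode
    }
}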
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
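For example, a splunk log driver token could be referenced from Secrets Manager instead of being written in plain text; the ARN below is a placeholder.
secret_options = [
    {
        'name': 'splunk-token',    # option name expected by the splunk log driver
        'valueFrom': 'arn:aws:secretsmanager:us-east-1:111122223333:secret:splunk-token-EXAMPLE'
    }
]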
serviceConnectResources (list) --
The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name.
(dict) --
The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service.
A task can resolve the dnsName for each of the clientAliases of a service. However, a task can't resolve the discovery names. If you want to connect to a service, refer to the ServiceConnectConfiguration of that service for the list of clientAliases that you can use.
discoveryName (string) --
The discovery name of this Service Connect resource.
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
discoveryArn (string) --
The Amazon Resource Name (ARN) for the service in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS.
volumeConfigurations (list) --
The details of the volume that was configuredAtLaunch. You can configure different settings such as the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) --
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as false, the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either snapshotId or sizeInGiB in your volume configuration. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeInitializationRate (integer) --
The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a snapshotId. For more information, see Initialize Amazon EBS volumes in the Amazon EBS User Guide.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) --
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) --
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
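As an illustration of the fields described above, a configuredAtLaunch volume backed by a gp3 EBS volume could be specified roughly like this; the volume name and role ARN are placeholders.
volume_configurations = [
    {
        'name': 'data',                  # must match the volume name in the task definition
        'managedEBSVolume': {
            'volumeType': 'gp3',
            'sizeInGiB': 100,            # gp3 supports 1-16,384 GiB
            'iops': 3000,                # gp3 baseline
            'throughput': 125,           # MiB/s
            'encrypted': True,
            'filesystemType': 'xfs',     # default for Linux
            'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole'  # placeholder
        }
    }
]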
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the deployment.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
vpcLatticeConfigurations (list) --
The VPC Lattice configuration for the service deployment.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to.
roleArn (string) --
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) --
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
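A VPC Lattice entry ties a port mapping name to a target group, for example as in the sketch below; the ARNs and names are placeholders.
vpc_lattice_configurations = [
    {
        'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole',
        'targetGroupArn': 'arn:aws:vpc-lattice:us-east-1:111122223333:targetgroup/tg-EXAMPLE',
        'portName': 'api'    # portMapping name from the task definition
    }
]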
roleArn (string) --
The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events (list) --
The event stream for your service. A maximum of 100 of the latest events are displayed.
(dict) --
The details for an event that's associated with a service.
id (string) --
The ID string for the event.
createdAt (datetime) --
The Unix timestamp for the time when the event was triggered.
message (string) --
The event message.
createdAt (datetime) --
The Unix timestamp for the time when the service was created.
placementConstraints (list) --
The placement constraints for the tasks in the service.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
placementStrategy (list) --
The placement strategy that determines how tasks for the service are placed.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
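For instance, spreading tasks across Availability Zones while binpacking on memory, and restricting placement with an illustrative cluster query expression, could be expressed as follows; the expression is an assumption for demonstration only.
placement_constraints = [
    {
        'type': 'memberOf',
        'expression': 'attribute:ecs.instance-type =~ m5.*'    # cluster query language, illustrative
    }
]
placement_strategy = [
    {'type': 'spread', 'field': 'attribute:ecs.availability-zone'},   # spread evenly across AZs
    {'type': 'binpack', 'field': 'memory'}                            # then pack on remaining memory
]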
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
healthCheckGracePeriodSeconds (integer) --
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started.
schedulingStrategy (string) --
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available.
REPLICA-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON-The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints.
deploymentController (dict) --
The deployment controller type the service is using.
type (string) --
The deployment controller type to use.
The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are:
ECS When you create a service which uses the ECS deployment controller, you can choose between the following deployment strategies:
ROLLING: When you create a service which uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios:
Gradual service updates: You need to update your service incrementally without taking the entire service offline at once.
Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments).
Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one.
No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds.
Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners.
No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments).
Stateful applications: Your application maintains state that makes it difficult to run two parallel environments.
Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment.
Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios.
BLUE_GREEN: A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios:
Service validation: When you need to validate new service revisions before directing production traffic to them
Zero downtime: When your service requires zero-downtime deployments
Instant roll back: When you need the ability to quickly roll back if issues are detected
Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect
External Use a third-party deployment controller.
Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it.
tags (list) --
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
createdBy (string) --
The principal that created the service.
enableECSManagedTags (boolean) --
Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
propagateTags (string) --
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
enableExecuteCommand (boolean) --
Determines whether the execute command functionality is turned on for the service. If true, the execute command functionality is turned on for all containers in tasks as part of the service.
availabilityZoneRebalancing (string) --
Indicates whether to use Availability Zone rebalancing for the service.
For more information, see Balancing an Amazon ECS service across Availability Zones in the Amazon Elastic Container Service Developer Guide .
{'launchType': {'MANAGED_INSTANCES'}}Response
{'taskSet': {'launchType': {'MANAGED_INSTANCES'}}}
Create a task set in the specified cluster and service. This is used when a service uses the EXTERNAL deployment controller type. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.
For information about the maximum number of task sets and other quotas, see Amazon ECS service quotas in the Amazon Elastic Container Service Developer Guide.
See also: AWS API Documentation
Request Syntax
client.create_task_set( service='string', cluster='string', externalId='string', taskDefinition='string', networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, loadBalancers=[ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], serviceRegistries=[ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], launchType='EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', capacityProviderStrategy=[ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], platformVersion='string', scale={ 'value': 123.0, 'unit': 'PERCENT' }, clientToken='string', tags=[ { 'key': 'string', 'value': 'string' }, ] )
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the service to create the task set in.
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to create the task set in.
string
An optional non-unique tag that identifies this task set in external systems. If the task set is associated with a service discovery registry, the tasks in this task set will have the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute set to the provided value.
string
[REQUIRED]
The task definition for the tasks in the task set to use. If a revision isn't specified, the latest ACTIVE revision is used.
dict
An object representing the network configuration for a task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) -- [REQUIRED]
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
list
A load balancer object representing the load balancer to use with the task set. The supported load balancer types are either an Application Load Balancer or a Network Load Balancer.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
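A load balancer entry for a blue/green setup might look like the following sketch; every ARN and name here is a placeholder.
load_balancers = [
    {
        'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/blue/EXAMPLE',
        'containerName': 'web',
        'containerPort': 80,
        'advancedConfiguration': {
            'alternateTargetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green/EXAMPLE',
            'productionListenerRule': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/EXAMPLE',
            'testListenerRule': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/TEST-EXAMPLE',
            'roleArn': 'arn:aws:iam::111122223333:role/ecsElbRole'    # role ECS uses to call ELB APIs
        }
    }
]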
list
The details of the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
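For an SRV-based Cloud Map registry with the awsvpc network mode, the entry could look like this sketch; the registry ARN is a placeholder.
service_registries = [
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:111122223333:service/srv-EXAMPLE',
        'port': 8080    # used because the discovery service defines an SRV record
    }
]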
string
The launch type that new tasks in the task set uses. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
If a launchType is specified, the capacityProviderStrategy parameter must be omitted.
list
The capacity provider strategy to use for the task set.
A capacity provider strategy consists of one or more capacity providers along with the base and weight to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE or UPDATING status can be used.
If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) -- [REQUIRED]
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
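To make the base and weight rules concrete, the following illustrative strategy runs at least two tasks on a capacity provider named on-demand (a placeholder name) and then splits remaining tasks 1:4 with a provider named spot.
capacity_provider_strategy = [
    {'capacityProvider': 'on-demand', 'base': 2, 'weight': 1},   # the first 2 tasks always land here
    {'capacityProvider': 'spot', 'base': 0, 'weight': 4}         # after the base, 4 tasks here per 1 on on-demand
]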
string
The platform version that the tasks in the task set uses. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used.
dict
A floating-point percentage of the desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
string
An identifier that you provide to ensure the idempotency of the request. It must be unique and is case sensitive. Up to 36 ASCII characters in the range of 33-126 (inclusive) are allowed.
list
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both. When a service is deleted, the tags are deleted.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
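Tying the request parameters together, a minimal call might look like the sketch below; the cluster, service, task definition, and network IDs are placeholders, and the service is assumed to use the EXTERNAL deployment controller.
import boto3

ecs = boto3.client('ecs')

response = ecs.create_task_set(
    cluster='prod-cluster',                    # placeholder cluster name
    service='orders-service',                  # must use the EXTERNAL deployment controller
    externalId='deploy-2025-09-30',            # optional tag for external deployment tooling
    taskDefinition='orders:42',                # family:revision, placeholder
    launchType='FARGATE',                      # omit if a capacityProviderStrategy is specified
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0abc1234de567890f'],
            'securityGroups': ['sg-0123456789abcdef0'],
            'assignPublicIp': 'DISABLED'
        }
    },
    scale={'value': 50.0, 'unit': 'PERCENT'},  # run 50% of the service's desiredCount in this task set
    clientToken='0f9d6c7e-example-token'       # idempotency token, up to 36 ASCII characters
)
print(response['taskSet']['status'])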
dict
Response Syntax
{ 'taskSet': { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } } }
Response Structure
(dict) --
taskSet (dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. A task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that is associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
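To illustrate the base and weight behavior described above, here is a minimal, hypothetical capacityProviderStrategy; the provider names and numbers are placeholders.
# Hypothetical strategy: the first 2 tasks always land on capacityProviderA (base),
# then remaining tasks are split 1:4 between A and B according to weight.
capacity_provider_strategy = [
    {'capacityProvider': 'capacityProviderA', 'base': 2, 'weight': 1},
    {'capacityProvider': 'capacityProviderB', 'base': 0, 'weight': 4},
]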
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
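A minimal sketch of an awsvpcConfiguration using the fields described above; the subnet and security group IDs are placeholders.
network_configuration = {
    'awsvpcConfiguration': {
        'subnets': ['subnet-0123456789abcdef0'],      # up to 16 subnets
        'securityGroups': ['sg-0123456789abcdef0'],   # up to 5 security groups; VPC default is used if omitted
        'assignPublicIp': 'DISABLED',                 # DISABLED is the default for create-service and update-service
    }
}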
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
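As a hedged sketch, a load balancer entry that includes the advancedConfiguration fields above might look like the following; every ARN and name is a placeholder.
load_balancer = {
    'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/blue-tg/0123456789abcdef',
    'containerName': 'web',
    'containerPort': 80,
    'advancedConfiguration': {
        # Alternate (green) target group used during blue/green traffic shifting.
        'alternateTargetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green-tg/fedcba9876543210',
        'productionListenerRule': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/my-alb/0123456789abcdef/0123456789abcdef/0123456789abcdef',
        'testListenerRule': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/my-alb/0123456789abcdef/0123456789abcdef/fedcba9876543210',
        'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole',
    },
}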
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
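A hedged sketch of the two serviceRegistries variants described above; the registry ARN and container values are placeholders.
# awsvpc network mode with a type SRV record: provide a port value.
service_registries_awsvpc = [
    {'registryArn': 'arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef', 'port': 8080},
]

# bridge or host network mode: provide a containerName and containerPort from the task definition instead.
service_registries_bridge = [
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef',
        'containerName': 'web',
        'containerPort': 80,
    },
]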
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment.
{'cluster': 'string'}Response
{'capacityProvider': {'cluster': 'string', 'managedInstancesProvider': {'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': {'ec2InstanceProfileArn': 'string', 'instanceRequirements': {'acceleratorCount': {'max': 'integer', 'min': 'integer'}, 'acceleratorManufacturers': ['amazon-web-services ' '| ' 'amd ' '| ' 'nvidia ' '| ' 'xilinx ' '| ' 'habana'], 'acceleratorNames': ['a100 ' '| ' 'inferentia ' '| ' 'k520 ' '| ' 'k80 ' '| ' 'm60 ' '| ' 'radeon-pro-v520 ' '| ' 't4 ' '| ' 'vu9p ' '| ' 'v100 ' '| ' 'a10g ' '| ' 'h100 ' '| ' 't4g'], 'acceleratorTotalMemoryMiB': {'max': 'integer', 'min': 'integer'}, 'acceleratorTypes': ['gpu ' '| ' 'fpga ' '| ' 'inference'], 'allowedInstanceTypes': ['string'], 'bareMetal': 'included ' '| ' 'required ' '| ' 'excluded', 'baselineEbsBandwidthMbps': {'max': 'integer', 'min': 'integer'}, 'burstablePerformance': 'included ' '| ' 'required ' '| ' 'excluded', 'cpuManufacturers': ['intel ' '| ' 'amd ' '| ' 'amazon-web-services'], 'excludedInstanceTypes': ['string'], 'instanceGenerations': ['current ' '| ' 'previous'], 'localStorage': 'included ' '| ' 'required ' '| ' 'excluded', 'localStorageTypes': ['hdd ' '| ' 'ssd'], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 'integer', 'memoryGiBPerVCpu': {'max': 'double', 'min': 'double'}, 'memoryMiB': {'max': 'integer', 'min': 'integer'}, 'networkBandwidthGbps': {'max': 'double', 'min': 'double'}, 'networkInterfaceCount': {'max': 'integer', 'min': 'integer'}, 'onDemandMaxPricePercentageOverLowestPrice': 'integer', 'requireHibernateSupport': 'boolean', 'spotMaxPricePercentageOverLowestPrice': 'integer', 'totalLocalStorageGB': {'max': 'double', 'min': 'double'}, 'vCpuCount': {'max': 'integer', 'min': 'integer'}}, 'monitoring': 'BASIC ' '| ' 'DETAILED', 'networkConfiguration': {'securityGroups': ['string'], 'subnets': ['string']}, 'storageConfiguration': {'storageSizeGiB': 'integer'}}, 'propagateTags': 'CAPACITY_PROVIDER ' '| NONE'}, 'status': {'PROVISIONING', 'DEPROVISIONING'}, 'type': 'EC2_AUTOSCALING | MANAGED_INSTANCES | FARGATE | ' 'FARGATE_SPOT', 'updateStatus': {'CREATE_COMPLETE', 'CREATE_FAILED', 'CREATE_IN_PROGRESS'}}}
Deletes the specified capacity provider.
Prior to a capacity provider being deleted, the capacity provider must be removed from the capacity provider strategy of all services. The UpdateService API can be used to remove a capacity provider from a service's capacity provider strategy. When updating a service, the forceNewDeployment option can be used to ensure that any tasks using the Amazon EC2 instance capacity provided by the capacity provider are transitioned to use the capacity from the remaining capacity providers. Only capacity providers that aren't associated with a cluster can be deleted. To remove a capacity provider from a cluster, you can either use PutClusterCapacityProviders or delete the cluster.
See also: AWS API Documentation
Request Syntax
client.delete_capacity_provider( capacityProvider='string', cluster='string' )
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the capacity provider to delete.
string
The name of the cluster that contains the capacity provider to delete. Managed instances capacity providers are cluster-scoped and can only be deleted from their associated cluster.
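A hedged end-to-end sketch of the deletion workflow described above, assuming a boto3 client; the cluster, service, and provider names are placeholders.
import boto3

ecs = boto3.client('ecs')

# 1. Remove the capacity provider from the service's strategy and force a new
#    deployment so running tasks move to the remaining provider(s).
ecs.update_service(
    cluster='my-cluster',
    service='my-service',
    capacityProviderStrategy=[{'capacityProvider': 'remaining-cp', 'weight': 1, 'base': 0}],
    forceNewDeployment=True,
)

# 2. Disassociate the provider from the cluster (alternatively, delete the cluster).
ecs.put_cluster_capacity_providers(
    cluster='my-cluster',
    capacityProviders=['remaining-cp'],
    defaultCapacityProviderStrategy=[{'capacityProvider': 'remaining-cp', 'weight': 1}],
)

# 3. Delete the capacity provider; cluster is passed here because Managed
#    Instances providers are cluster-scoped.
ecs.delete_capacity_provider(capacityProvider='mi-provider', cluster='my-cluster')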
dict
Response Syntax
{ 'capacityProvider': { 'capacityProviderArn': 'string', 'name': 'string', 'cluster': 'string', 'status': 'PROVISIONING'|'ACTIVE'|'DEPROVISIONING'|'INACTIVE', 'autoScalingGroupProvider': { 'autoScalingGroupArn': 'string', 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' }, 'managedInstancesProvider': { 'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': { 'ec2InstanceProfileArn': 'string', 'networkConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ] }, 'storageConfiguration': { 'storageSizeGiB': 123 }, 'monitoring': 'BASIC'|'DETAILED', 'instanceRequirements': { 'vCpuCount': { 'min': 123, 'max': 123 }, 'memoryMiB': { 'min': 123, 'max': 123 }, 'cpuManufacturers': [ 'intel'|'amd'|'amazon-web-services', ], 'memoryGiBPerVCpu': { 'min': 123.0, 'max': 123.0 }, 'excludedInstanceTypes': [ 'string', ], 'instanceGenerations': [ 'current'|'previous', ], 'spotMaxPricePercentageOverLowestPrice': 123, 'onDemandMaxPricePercentageOverLowestPrice': 123, 'bareMetal': 'included'|'required'|'excluded', 'burstablePerformance': 'included'|'required'|'excluded', 'requireHibernateSupport': True|False, 'networkInterfaceCount': { 'min': 123, 'max': 123 }, 'localStorage': 'included'|'required'|'excluded', 'localStorageTypes': [ 'hdd'|'ssd', ], 'totalLocalStorageGB': { 'min': 123.0, 'max': 123.0 }, 'baselineEbsBandwidthMbps': { 'min': 123, 'max': 123 }, 'acceleratorTypes': [ 'gpu'|'fpga'|'inference', ], 'acceleratorCount': { 'min': 123, 'max': 123 }, 'acceleratorManufacturers': [ 'amazon-web-services'|'amd'|'nvidia'|'xilinx'|'habana', ], 'acceleratorNames': [ 'a100'|'inferentia'|'k520'|'k80'|'m60'|'radeon-pro-v520'|'t4'|'vu9p'|'v100'|'a10g'|'h100'|'t4g', ], 'acceleratorTotalMemoryMiB': { 'min': 123, 'max': 123 }, 'networkBandwidthGbps': { 'min': 123.0, 'max': 123.0 }, 'allowedInstanceTypes': [ 'string', ], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 123 } }, 'propagateTags': 'CAPACITY_PROVIDER'|'NONE' }, 'updateStatus': 'CREATE_IN_PROGRESS'|'CREATE_COMPLETE'|'CREATE_FAILED'|'DELETE_IN_PROGRESS'|'DELETE_COMPLETE'|'DELETE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED', 'updateStatusReason': 'string', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'type': 'EC2_AUTOSCALING'|'MANAGED_INSTANCES'|'FARGATE'|'FARGATE_SPOT' } }
Response Structure
(dict) --
capacityProvider (dict) --
The details of the capacity provider.
capacityProviderArn (string) --
The Amazon Resource Name (ARN) that identifies the capacity provider.
name (string) --
The name of the capacity provider.
cluster (string) --
The cluster that this capacity provider is associated with. Managed instances capacity providers are cluster-scoped, meaning they can only be used within their associated cluster.
status (string) --
The current status of the capacity provider. Only capacity providers in an ACTIVE state can be used in a cluster. When a capacity provider is successfully deleted, it has an INACTIVE status.
autoScalingGroupProvider (dict) --
The Auto Scaling group settings for the capacity provider.
autoScalingGroupArn (string) --
The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or the Auto Scaling group name.
managedScaling (dict) --
The managed scaling settings for the Auto Scaling group capacity provider.
status (string) --
Determines whether to use managed scaling for the capacity provider.
targetCapacity (integer) --
The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than 0 and less than or equal to 100. For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a targetCapacity of 90. The default value of 100 percent results in the Amazon EC2 instances in your Auto Scaling group being completely used.
minimumScalingStepSize (integer) --
The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.
When additional capacity is required, Amazon ECS will scale up the minimum scaling step size even if the actual demand is less than the minimum scaling step size.
maximumScalingStepSize (integer) --
The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. If this parameter is omitted, the default value of 10000 is used.
instanceWarmupPeriod (integer) --
The period of time, in seconds, after which a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of 300 seconds is used.
managedTerminationProtection (string) --
The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is off.
When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on as well. For more information, see Instance Protection in the Auto Scaling User Guide.
When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in.
managedDraining (string) --
The managed draining option for the Auto Scaling group capacity provider. When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider.
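For reference, a hedged sketch of an autoScalingGroupProvider value using the managed scaling fields described above; the ARN and all numbers are placeholders.
auto_scaling_group_provider = {
    'autoScalingGroupArn': 'arn:aws:autoscaling:us-east-1:111122223333:autoScalingGroup:0123abcd-4e5f-6789-abcd-ef0123456789:autoScalingGroupName/my-asg',
    'managedScaling': {
        'status': 'ENABLED',
        'targetCapacity': 90,           # keep roughly 10% spare capacity
        'minimumScalingStepSize': 1,    # scale out at least 1 instance at a time
        'maximumScalingStepSize': 100,  # scale out at most 100 instances at a time
        'instanceWarmupPeriod': 300,    # seconds before a new instance counts toward metrics
    },
    'managedTerminationProtection': 'ENABLED',
    'managedDraining': 'ENABLED',
}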
managedInstancesProvider (dict) --
The configuration for the Amazon ECS Managed Instances provider. This includes the infrastructure role, the launch template configuration, and tag propagation settings.
infrastructureRoleArn (string) --
The Amazon Resource Name (ARN) of the infrastructure role that Amazon ECS assumes to manage instances. This role must include permissions for Amazon EC2 instance lifecycle management, networking, and any additional Amazon Web Services services required for your workloads.
For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
instanceLaunchTemplate (dict) --
The launch template that defines how Amazon ECS launches Amazon ECS Managed Instances. This includes the instance profile for your tasks, network and storage configuration, and instance requirements that determine which Amazon EC2 instance types can be used.
For more information, see Store instance launch parameters in Amazon EC2 launch templates in the Amazon EC2 User Guide.
ec2InstanceProfileArn (string) --
The Amazon Resource Name (ARN) of the instance profile that Amazon ECS applies to Amazon ECS Managed Instances. This instance profile must include the necessary permissions for your tasks to access Amazon Web Services services and resources.
For more information, see Amazon ECS instance profile for Managed Instances in the Amazon ECS Developer Guide.
networkConfiguration (dict) --
The network configuration for Amazon ECS Managed Instances. This specifies the subnets and security groups that instances use for network connectivity.
subnets (list) --
The list of subnet IDs where Amazon ECS can launch Amazon ECS Managed Instances. Instances are distributed across the specified subnets for high availability. All subnets must be in the same VPC.
(string) --
securityGroups (list) --
The list of security group IDs to apply to Amazon ECS Managed Instances. These security groups control the network traffic allowed to and from the instances.
(string) --
storageConfiguration (dict) --
The storage configuration for Amazon ECS Managed Instances. This defines the root volume size and type for the instances.
storageSizeGiB (integer) --
The size of the tasks volume.
monitoring (string) --
CloudWatch provides two categories of monitoring: basic monitoring and detailed monitoring. By default, your managed instance is configured for basic monitoring. You can optionally enable detailed monitoring to help you more quickly identify and act on operational issues. You can enable or turn off detailed monitoring at launch or when the managed instance is running or stopped. For more information, see Detailed monitoring for Amazon ECS Managed Instances in the Amazon ECS Developer Guide.
instanceRequirements (dict) --
The instance requirements. You can specify:
The instance types
Instance requirements such as vCPU count, memory, network performance, and accelerator specifications
Amazon ECS automatically selects the instances that match the specified criteria.
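A minimal, hypothetical instanceRequirements value using a few of the fields documented below; every number and filter is a placeholder, not a recommendation.
instance_requirements = {
    'vCpuCount': {'min': 2, 'max': 8},          # only instance types with 2-8 vCPUs
    'memoryMiB': {'min': 4096, 'max': 32768},   # and 4-32 GiB of memory
    'cpuManufacturers': ['intel', 'amd'],
    'instanceGenerations': ['current'],
    'burstablePerformance': 'excluded',
}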
vCpuCount (dict) --
The minimum and maximum number of vCPUs for the instance types. Amazon ECS selects instance types that have vCPU counts within this range.
min (integer) --
The minimum number of vCPUs. Instance types with fewer vCPUs than this value are excluded from selection.
max (integer) --
The maximum number of vCPUs. Instance types with more vCPUs than this value are excluded from selection.
memoryMiB (dict) --
The minimum and maximum amount of memory in mebibytes (MiB) for the instance types. Amazon ECS selects instance types that have memory within this range.
min (integer) --
The minimum amount of memory in MiB. Instance types with less memory than this value are excluded from selection.
max (integer) --
The maximum amount of memory in MiB. Instance types with more memory than this value are excluded from selection.
cpuManufacturers (list) --
The CPU manufacturers to include or exclude. You can specify intel, amd, or amazon-web-services to control which CPU types are used for your workloads.
(string) --
memoryGiBPerVCpu (dict) --
The minimum and maximum amount of memory per vCPU in gibibytes (GiB). This helps ensure that instance types have the appropriate memory-to-CPU ratio for your workloads.
min (float) --
The minimum amount of memory per vCPU in GiB. Instance types with a lower memory-to-vCPU ratio are excluded from selection.
max (float) --
The maximum amount of memory per vCPU in GiB. Instance types with a higher memory-to-vCPU ratio are excluded from selection.
excludedInstanceTypes (list) --
The instance types to exclude from selection. Use this to prevent Amazon ECS from selecting specific instance types that may not be suitable for your workloads.
(string) --
instanceGenerations (list) --
The instance generations to include. You can specify current to use the latest generation instances, or previous to include previous generation instances for cost optimization.
(string) --
spotMaxPricePercentageOverLowestPrice (integer) --
The maximum price for Spot instances as a percentage over the lowest priced On-Demand instance. This helps control Spot instance costs while maintaining access to capacity.
onDemandMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for On-Demand Instances, as a percentage higher than an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon ECS selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold.
bareMetal (string) --
Indicates whether to include bare metal instance types. Set to included to allow bare metal instances, excluded to exclude them, or required to use only bare metal instances.
burstablePerformance (string) --
Indicates whether to include burstable performance instance types (T2, T3, T3a, T4g). Set to included to allow burstable instances, excluded to exclude them, or required to use only burstable instances.
requireHibernateSupport (boolean) --
Indicates whether the instance types must support hibernation. When set to true, only instance types that support hibernation are selected.
networkInterfaceCount (dict) --
The minimum and maximum number of network interfaces for the instance types. This is useful for workloads that require multiple network interfaces.
min (integer) --
The minimum number of network interfaces. Instance types that support fewer network interfaces are excluded from selection.
max (integer) --
The maximum number of network interfaces. Instance types that support more network interfaces are excluded from selection.
localStorage (string) --
Indicates whether to include instance types with local storage. Set to included to allow local storage, excluded to exclude it, or required to use only instances with local storage.
localStorageTypes (list) --
The local storage types to include. You can specify hdd for hard disk drives, ssd for solid state drives, or both.
(string) --
totalLocalStorageGB (dict) --
The minimum and maximum total local storage in gigabytes (GB) for instance types with local storage.
min (float) --
The minimum total local storage in GB. Instance types with less local storage are excluded from selection.
max (float) --
The maximum total local storage in GB. Instance types with more local storage are excluded from selection.
baselineEbsBandwidthMbps (dict) --
The minimum and maximum baseline Amazon EBS bandwidth in megabits per second (Mbps). This is important for workloads with high storage I/O requirements.
min (integer) --
The minimum baseline Amazon EBS bandwidth in Mbps. Instance types with lower Amazon EBS bandwidth are excluded from selection.
max (integer) --
The maximum baseline Amazon EBS bandwidth in Mbps. Instance types with higher Amazon EBS bandwidth are excluded from selection.
acceleratorTypes (list) --
The accelerator types to include. You can specify gpu for graphics processing units, fpga for field programmable gate arrays, or inference for machine learning inference accelerators.
(string) --
acceleratorCount (dict) --
The minimum and maximum number of accelerators for the instance types. This is used when you need instances with specific numbers of GPUs or other accelerators.
min (integer) --
The minimum number of accelerators. Instance types with fewer accelerators are excluded from selection.
max (integer) --
The maximum number of accelerators. Instance types with more accelerators are excluded from selection.
acceleratorManufacturers (list) --
The accelerator manufacturers to include. You can specify nvidia, amd, amazon-web-services, or xilinx depending on your accelerator requirements.
(string) --
acceleratorNames (list) --
The specific accelerator names to include. For example, you can specify a100, v100, k80, or other specific accelerator models.
(string) --
acceleratorTotalMemoryMiB (dict) --
The minimum and maximum total accelerator memory in mebibytes (MiB). This is important for GPU workloads that require specific amounts of video memory.
min (integer) --
The minimum total accelerator memory in MiB. Instance types with less accelerator memory are excluded from selection.
max (integer) --
The maximum total accelerator memory in MiB. Instance types with more accelerator memory are excluded from selection.
networkBandwidthGbps (dict) --
The minimum and maximum network bandwidth in gigabits per second (Gbps). This is crucial for network-intensive workloads that require high throughput.
min (float) --
The minimum network bandwidth in Gbps. Instance types with lower network bandwidth are excluded from selection.
max (float) --
The maximum network bandwidth in Gbps. Instance types with higher network bandwidth are excluded from selection.
allowedInstanceTypes (list) --
The instance types to include in the selection. When specified, Amazon ECS only considers these instance types, subject to the other requirements specified.
(string) --
maxSpotPriceAsPercentageOfOptimalOnDemandPrice (integer) --
The maximum price for Spot instances as a percentage of the optimal On-Demand price. This provides more precise cost control for Spot instance selection.
propagateTags (string) --
Determines whether tags from the capacity provider are automatically applied to Amazon ECS Managed Instances. This helps with cost allocation and resource management by ensuring consistent tagging across your infrastructure.
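Putting the fields above together, a hedged sketch of the managedInstancesProvider shape; the ARNs, subnet, security group, and numbers are placeholders, and instanceRequirements is abbreviated.
managed_instances_provider = {
    'infrastructureRoleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole',
    'instanceLaunchTemplate': {
        'ec2InstanceProfileArn': 'arn:aws:iam::111122223333:instance-profile/ecsManagedInstanceProfile',
        'networkConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],
            'securityGroups': ['sg-0123456789abcdef0'],
        },
        'storageConfiguration': {'storageSizeGiB': 100},
        'monitoring': 'BASIC',
        'instanceRequirements': {
            'vCpuCount': {'min': 2, 'max': 8},
            'memoryMiB': {'min': 4096, 'max': 32768},
        },
    },
    'propagateTags': 'CAPACITY_PROVIDER',
}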
updateStatus (string) --
The update status of the capacity provider. The following are the possible states that are returned.
DELETE_IN_PROGRESS
The capacity provider is in the process of being deleted.
DELETE_COMPLETE
The capacity provider was successfully deleted and has an INACTIVE status.
DELETE_FAILED
The capacity provider can't be deleted. The update status reason provides further details about why the delete failed.
updateStatusReason (string) --
The update status reason. This provides further details about the update status for the capacity provider.
tags (list) --
The metadata that you apply to the capacity provider to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
type (string) --
The type of capacity provider. For Amazon ECS Managed Instances, this value is MANAGED_INSTANCES, indicating that Amazon ECS manages the underlying Amazon EC2 instances on your behalf.
{'service': {'deployments': {'launchType': {'MANAGED_INSTANCES'}}, 'launchType': {'MANAGED_INSTANCES'}, 'taskSets': {'launchType': {'MANAGED_INSTANCES'}}}}
Deletes a specified service within a cluster. You can delete a service if you have no running tasks in it and the desired task count is zero. If the service is actively maintaining tasks, you can't delete it, and you must update the service to a desired task count of zero. For more information, see UpdateService.
See also: AWS API Documentation
Request Syntax
client.delete_service( cluster='string', service='string', force=True|False )
string
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to delete. If you do not specify a cluster, the default cluster is assumed.
string
[REQUIRED]
The name of the service to delete.
boolean
If true, allows you to delete a service even if it wasn't scaled down to zero tasks. It's only necessary to use this if the service uses the REPLICA scheduling strategy.
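A hedged usage sketch of the calls described above; the cluster and service names are placeholders.
import boto3

ecs = boto3.client('ecs')

# Scale the service down to zero tasks first...
ecs.update_service(cluster='my-cluster', service='my-service', desiredCount=0)

# ...then delete it. Alternatively, pass force=True to delete a REPLICA service
# that hasn't been scaled down to zero.
ecs.delete_service(cluster='my-cluster', service='my-service', force=False)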
dict
Response Syntax
{ 'service': { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ], 'hookDetails': {...}|[...]|123|123.4|'string'|True|None }, ] }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 
'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False, 'availabilityZoneRebalancing': 'ENABLED'|'DISABLED' } }Response Structure
(dict) --
service (dict) --
The full description of the deleted service.
serviceArn (string) --
The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide.
serviceName (string) --
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers (list) --
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
serviceRegistries (list) --
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
status (string) --
The status of the service. The valid values are ACTIVE, DRAINING, or INACTIVE.
desiredCount (integer) --
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
runningCount (integer) --
The number of tasks in the cluster that are in the RUNNING state.
pendingCount (integer) --
The number of tasks in the cluster that are in the PENDING state.
launchType (string) --
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy (list) --
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
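As a minimal sketch of the base and weight behavior described above, the following boto3 call (the cluster, service, task definition, and capacity provider names are hypothetical) keeps at least one task on capacityProviderA and then splits the remaining tasks 1:4 between A and B:

    import boto3

    ecs = boto3.client("ecs")

    # Hypothetical names; base/weight values illustrate the 1:4 split described above.
    ecs.create_service(
        cluster="my-cluster",
        serviceName="my-service",
        taskDefinition="my-task-def:1",
        desiredCount=10,
        capacityProviderStrategy=[
            # base=1: at least one task always runs on capacityProviderA.
            {"capacityProvider": "capacityProviderA", "weight": 1, "base": 1},
            # After the base is satisfied, remaining tasks are split 1:4 between A and B.
            {"capacityProvider": "capacityProviderB", "weight": 4, "base": 0},
        ],
    )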
platformVersion (string) --
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
taskDefinition (string) --
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
deploymentConfiguration (dict) --
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide
enable (boolean) --
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the service uses either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
If any tasks are unhealthy and if maximumPercent doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one by one, using the minimumHealthyPercent as a constraint, to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services.
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) --
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) --
Determines whether to use the CloudWatch alarm option in the service deployment process.
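A minimal sketch of the deploymentConfiguration shape described above, assuming a rolling-update service and hypothetical cluster, service, and alarm names; it shows how the circuit breaker, deployment percentages, and CloudWatch alarm rollback might be supplied to UpdateService:

    import boto3

    ecs = boto3.client("ecs")

    # Hypothetical cluster/service/alarm names.
    ecs.update_service(
        cluster="my-cluster",
        service="my-service",
        deploymentConfiguration={
            # Allow up to 2x desiredCount during the rollout; never drop below 100% healthy.
            "maximumPercent": 200,
            "minimumHealthyPercent": 100,
            # Fail the deployment (and roll back) if it can't reach a steady state.
            "deploymentCircuitBreaker": {"enable": True, "rollback": True},
            # Also roll back if either alarm goes into ALARM during the deployment.
            "alarms": {
                "alarmNames": ["my-5xx-alarm", "my-latency-alarm"],
                "enable": True,
                "rollback": True,
            },
        },
    )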
strategy (string) --
The deployment strategy for the service. Choose from these valid values:
ROLLING - When you create a service which uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration.
BLUE_GREEN - A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed.
bakeTimeInMinutes (integer) --
The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted.
You must provide this parameter when you use the BLUE_GREEN deployment strategy.
lifecycleHooks (list) --
An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle.
(dict) --
A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets.
For more information, see Lifecycle hooks for Amazon ECS service deployments in the Amazon Elastic Container Service Developer Guide.
hookTargetArn (string) --
The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported.
You must provide this parameter when configuring a deployment lifecycle hook.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf.
For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the Amazon Elastic Container Service Developer Guide.
lifecycleStages (list) --
The lifecycle stages at which to run the hook. Choose from these valid values:
RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage.
PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage.
POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage.
PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. You can use a lifecycle hook for this stage.
POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. You can use a lifecycle hook for this stage.
You must provide this parameter when configuring a deployment lifecycle hook.
(string) --
hookDetails (document) --
The details of the deployment lifecycle hook. This provides additional configuration for how the hook should be executed during deployment operations on Amazon ECS Managed Instances.
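The sketch below shows how the BLUE_GREEN strategy, bake time, and a lifecycle hook described above might fit together inside a deploymentConfiguration; the Lambda function ARN, role ARN, and stage choice are hypothetical.

    # Hypothetical ARNs; a sketch of the blue/green deployment settings described above.
    blue_green_deployment_configuration = {
        "strategy": "BLUE_GREEN",
        # Keep blue and green running together for 15 minutes after production traffic shifts.
        "bakeTimeInMinutes": 15,
        "lifecycleHooks": [
            {
                # Lambda function invoked after the test traffic shift completes.
                "hookTargetArn": "arn:aws:lambda:us-east-1:111122223333:function:post-test-validation",
                "roleArn": "arn:aws:iam::111122223333:role/ecsDeploymentHookRole",
                "lifecycleStages": ["POST_TEST_TRAFFIC_SHIFT"],
            }
        ],
    }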
taskSets (list) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that's associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
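A minimal sketch of the awsvpcConfiguration shape described above, with hypothetical subnet and security group IDs:

    # Hypothetical subnet and security group IDs.
    network_configuration = {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
            "securityGroups": ["sg-0123456789abcdef0"],
            # Must be DISABLED when the service deploymentController is ECS.
            "assignPublicIp": "DISABLED",
        }
    }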
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
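A sketch of a single loadBalancers entry with the blue/green advancedConfiguration described above; all ARNs and names below are placeholders.

    # All ARNs and names below are placeholders.
    load_balancers = [
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/blue-tg/abc123",
            "containerName": "web",
            "containerPort": 8080,
            "advancedConfiguration": {
                # Alternate (green) target group plus the listener rules used for traffic shifting.
                "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green-tg/def456",
                "productionListenerRule": "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/my-alb/xyz/prod-rule",
                "testListenerRule": "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/my-alb/xyz/test-rule",
                "roleArn": "arn:aws:iam::111122223333:role/ecsElbTrafficShiftRole",
            },
        }
    ]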
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
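A minimal sketch of a serviceRegistries entry for an awsvpc-mode task using a Cloud Map SRV record; the registry ARN and port are hypothetical. Because the task uses awsvpc mode with an SRV record, only a port value is given, not a containerName and containerPort combination.

    # Hypothetical Cloud Map service ARN; port applies to SRV records with awsvpc network mode.
    service_registries = [
        {
            "registryArn": "arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef",
            "port": 8080,
        }
    ]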
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
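A short sketch of how the computedDesiredCount described earlier follows from the service's desiredCount and the task set's scale (the values are illustrative):

    import math

    desired_count = 4                            # the service's desiredCount
    scale = {"value": 30.0, "unit": "PERCENT"}   # the task set's scale

    # 4 * 30% = 1.2, which always rounds up to 2 tasks.
    computed_desired_count = math.ceil(desired_count * scale["value"] / 100)
    print(computed_desired_count)  # 2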
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
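A minimal sketch of the tag shape described above; the keys and values are placeholders.

    # Placeholder keys/values; keys must not use the reserved aws: prefix.
    tags = [
        {"key": "team", "value": "payments"},
        {"key": "environment", "value": "production"},
    ]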
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment.
deployments (list) --
The current state of deployments for the service.
(dict) --
The details of an Amazon ECS service deployment. This is used only when a service uses the ECS deployment controller type.
id (string) --
The ID of the deployment.
status (string) --
The status of the deployment. The following describes each state.
PRIMARY
The most recent deployment of a service.
ACTIVE
A service deployment that still has running tasks, but is in the process of being replaced with a new PRIMARY deployment.
INACTIVE
A deployment that has been completely replaced.
taskDefinition (string) --
The most recent task definition that was specified for the tasks in the service to use.
desiredCount (integer) --
The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount (integer) --
The number of tasks in the deployment that are in the PENDING status.
runningCount (integer) --
The number of tasks in the deployment that are in the RUNNING status.
failedTasks (integer) --
The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a RUNNING state, or if it fails any of its defined health checks and is stopped.
createdAt (datetime) --
The Unix timestamp for the time when the service deployment was created.
updatedAt (datetime) --
The Unix timestamp for the time when the service deployment was last updated.
capacityProviderStrategy (list) --
The capacity provider strategy that the deployment is using.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
launchType (string) --
The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
platformVersion (string) --
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
rolloutState (string) --
The rollout state of the deployment. When a service deployment is started, it begins in an IN_PROGRESS state. When the service reaches a steady state, the deployment transitions to a COMPLETED state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a FAILED state. A deployment in FAILED state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker.
rolloutStateReason (string) --
A description of the rollout state of a deployment.
serviceConnectConfiguration (dict) --
The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments.
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) --
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
testTrafficRules (dict) --
The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic.
header (dict) --
The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers.
name (string) --
The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like X-Test-Version or X-Canary-Request that can be used to identify test traffic.
value (dict) --
The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions.
exact (string) --
The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments.
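A sketch of a serviceConnectConfiguration that ties together the portName, clientAliases, and header-based testTrafficRules described above; the namespace, names, and header are hypothetical.

    # Hypothetical namespace, names, and header; a sketch of the fields described above.
    service_connect_configuration = {
        "enabled": True,
        "namespace": "internal",
        "services": [
            {
                # Must match a portMappings name in the task definition.
                "portName": "api",
                "discoveryName": "orders-api",
                "clientAliases": [
                    {
                        "port": 8080,
                        "dnsName": "orders",
                        # Route only requests carrying this header to the new revision during testing.
                        "testTrafficRules": {
                            "header": {
                                "name": "X-Canary-Request",
                                "value": {"exact": "true"},
                            }
                        },
                    }
                ],
            }
        ],
    }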
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP/HTTP2/GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) --
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
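Continuing the Service Connect sketch, the per-service timeout and TLS settings described above might look like the following; the Private CA ARN, KMS key ARN, and role ARN are placeholders.

    # Placeholder ARNs; these keys sit alongside portName/clientAliases in a Service Connect service entry.
    service_connect_timeout_and_tls = {
        "timeout": {
            "idleTimeoutSeconds": 300,       # 5 minutes (the HTTP/HTTP2/GRPC default)
            "perRequestTimeoutSeconds": 30,  # not allowed when the appProtocol is TCP
        },
        "tls": {
            "issuerCertificateAuthority": {
                "awsPcaAuthorityArn": "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/abcd-1234"
            },
            "kmsKey": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
            "roleArn": "arn:aws:iam::111122223333:role/ecsServiceConnectTlsRole",
        },
    }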
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve a potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
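A minimal sketch of the awslogs options discussed above; the log group, Region, and prefix are placeholders.

    # Placeholder log group, Region, and prefix; non-blocking mode with an explicit buffer size.
    log_configuration = {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/my-service",
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "my-service",
            "awslogs-create-group": "true",
            "mode": "non-blocking",
            "max-buffer-size": "25m",
        },
    }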
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
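A sketch of a secretOptions entry, assuming (hypothetically) that the log driver expects a splunk-token option; the Secrets Manager ARN is a placeholder.

    # Placeholder Secrets Manager ARN; the name must match the option the log driver expects.
    secret_options = [
        {
            "name": "splunk-token",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:splunk-token-AbCdEf",
        }
    ]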
serviceConnectResources (list) --
The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name.
(dict) --
The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service.
A task can resolve the dnsName for each of the clientAliases of a service. However, a task can't resolve the discovery names. If you want to connect to a service, refer to the ServiceConnectConfiguration of that service for the list of clientAliases that you can use.
discoveryName (string) --
The discovery name of this Service Connect resource.
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
discoveryArn (string) --
The Amazon Resource Name (ARN) for the service in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS.
volumeConfigurations (list) --
The details of the volume that was configuredAtLaunch. You can configure different settings like the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) --
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as false, the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either snapshotId or sizeInGiB in your volume configuration. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeInitializationRate (integer) --
The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a snapshotId. For more information, see Initialize Amazon EBS volumes in the Amazon EBS User Guide.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) --
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) --
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the deployment.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
vpcLatticeConfigurations (list) --
The VPC Lattice configuration for the service deployment.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group or groups that Amazon ECS tasks will be registered to.
roleArn (string) --
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) --
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
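For reference, a VPC Lattice configuration entry of this shape might be supplied in a create_service or update_service request; this is a hedged sketch, and the ARNs and port name below are hypothetical placeholders.
# Sketch only; ARNs and portName are placeholders.
vpc_lattice_configurations = [
    {
        'roleArn': 'arn:aws:iam::123456789012:role/ecsInfrastructureRole',
        'targetGroupArn': 'arn:aws:vpc-lattice:us-east-1:123456789012:targetgroup/tg-0123456789abcdef0',
        'portName': 'web'   # must match a portMapping name in the task definition
    }
]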
roleArn (string) --
The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events (list) --
The event stream for your service. A maximum of 100 of the latest events are displayed.
(dict) --
The details for an event that's associated with a service.
id (string) --
The ID string for the event.
createdAt (datetime) --
The Unix timestamp for the time when the event was triggered.
message (string) --
The event message.
createdAt (datetime) --
The Unix timestamp for the time when the service was created.
placementConstraints (list) --
The placement constraints for the tasks in the service.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
placementStrategy (list) --
The placement strategy that determines how tasks for the service are placed.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
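As an illustration, placement constraints and strategies of the shapes described above could be passed to run_task or create_service as shown in this hedged sketch; the attribute expression is an example, not taken from this document.
# Spread tasks across Availability Zones, then binpack on memory,
# and restrict placement to t3 instance types (example expression).
placement_strategy = [
    {'type': 'spread', 'field': 'attribute:ecs.availability-zone'},
    {'type': 'binpack', 'field': 'memory'},
]
placement_constraints = [
    {'type': 'memberOf', 'expression': 'attribute:ecs.instance-type =~ t3.*'},
]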
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
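A minimal awsvpc network configuration of this shape might look as follows in a boto3 request; this is a sketch, and the subnet and security group IDs are placeholders.
network_configuration = {
    'awsvpcConfiguration': {
        'subnets': ['subnet-0123456789abcdef0', 'subnet-0fedcba9876543210'],   # up to 16 subnets
        'securityGroups': ['sg-0123456789abcdef0'],                            # up to 5 security groups
        'assignPublicIp': 'DISABLED'
    }
}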
healthCheckGracePeriodSeconds (integer) --
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started.
schedulingStrategy (string) --
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available.
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints.
deploymentController (dict) --
The deployment controller type the service is using.
type (string) --
The deployment controller type to use.
The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are:
ECS When you create a service that uses the ECS deployment controller, you can choose between the following deployment strategies:
ROLLING: When you create a service that uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios:
Gradual service updates: You need to update your service incrementally without taking the entire service offline at once.
Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments).
Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one.
No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds.
Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners.
No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments).
Stateful applications: Your application maintains state that makes it difficult to run two parallel environments.
Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment.
Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios.
BLUE_GREEN: A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios:
Service validation: When you need to validate new service revisions before directing production traffic to them
Zero downtime: When your service requires zero-downtime deployments
Instant roll back: When you need the ability to quickly roll back if issues are detected
Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect
External Use a third-party deployment controller.
Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it.
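In a create_service request, the deployment controller is expressed as a small dict. The following hedged sketch shows the ECS controller choice; CODE_DEPLOY and EXTERNAL are shown as comments for comparison.
deployment_controller = {'type': 'ECS'}             # rolling update or blue/green, managed by Amazon ECS
# deployment_controller = {'type': 'CODE_DEPLOY'}   # blue/green powered by CodeDeploy
# deployment_controller = {'type': 'EXTERNAL'}      # third-party deployment controller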
tags (list) --
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and the value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
createdBy (string) --
The principal that created the service.
enableECSManagedTags (boolean) --
Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
propagateTags (string) --
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
enableExecuteCommand (boolean) --
Determines whether the execute command functionality is turned on for the service. If true, the execute command functionality is turned on for all containers in tasks as part of the service.
availabilityZoneRebalancing (string) --
Indicates whether to use Availability Zone rebalancing for the service.
For more information, see Balancing an Amazon ECS service across Availability Zones in the Amazon Elastic Container Service Developer Guide .
{'taskDefinitions': {'compatibilities': {'MANAGED_INSTANCES'}, 'requiresCompatibilities': {'MANAGED_INSTANCES'}}}
Deletes one or more task definitions.
You must deregister a task definition revision before you delete it. For more information, see DeregisterTaskDefinition.
When you delete a task definition revision, it immediately transitions from the INACTIVE state to the DELETE_IN_PROGRESS state. Existing tasks and services that reference a DELETE_IN_PROGRESS task definition revision continue to run without disruption. Existing services that reference a DELETE_IN_PROGRESS task definition revision can still scale up or down by modifying the service's desired count.
You can't use a DELETE_IN_PROGRESS task definition revision to run new tasks or create new services. You also can't update an existing service to reference a DELETE_IN_PROGRESS task definition revision.
A task definition revision will stay in DELETE_IN_PROGRESS status until all the associated tasks and services have been terminated.
When you delete all INACTIVE task definition revisions, the task definition name isn't displayed in the console and isn't returned in the API. If any task definition revisions are in the DELETE_IN_PROGRESS state, the task definition name is displayed in the console and returned in the API. The task definition name is retained by Amazon ECS and the revision is incremented the next time you create a task definition with that name.
See also: AWS API Documentation
Request Syntax
response = client.delete_task_definitions(
    taskDefinitions=[
        'string',
    ]
)
list
[REQUIRED]
The family and revision ( family:revision) or full Amazon Resource Name (ARN) of the task definition to delete. You must specify a revision.
You can specify up to 10 task definitions as a comma-separated list.
(string) --
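For illustration, a hedged usage sketch of this operation follows; the family names and revisions are hypothetical placeholders.
import boto3

ecs = boto3.client('ecs')

response = ecs.delete_task_definitions(
    taskDefinitions=[
        'example-task:1',   # family:revision
        'example-task:2',
    ]
)
for td in response['taskDefinitions']:
    print(td['taskDefinitionArn'], td['status'])     # typically DELETE_IN_PROGRESS
for failure in response.get('failures', []):
    print(failure['arn'], failure['reason'])         # revisions that couldn't be deleted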
dict
Response Syntax
{ 'taskDefinitions': [ { 'taskDefinitionArn': 'string', 'containerDefinitions': [ { 'name': 'string', 'image': 'string', 'repositoryCredentials': { 'credentialsParameter': 'string' }, 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'links': [ 'string', ], 'portMappings': [ { 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'name': 'string', 'appProtocol': 'http'|'http2'|'grpc', 'containerPortRange': 'string' }, ], 'essential': True|False, 'restartPolicy': { 'enabled': True|False, 'ignoredExitCodes': [ 123, ], 'restartAttemptPeriod': 123 }, 'entryPoint': [ 'string', ], 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'mountPoints': [ { 'sourceVolume': 'string', 'containerPath': 'string', 'readOnly': True|False }, ], 'volumesFrom': [ { 'sourceContainer': 'string', 'readOnly': True|False }, ], 'linuxParameters': { 'capabilities': { 'add': [ 'string', ], 'drop': [ 'string', ] }, 'devices': [ { 'hostPath': 'string', 'containerPath': 'string', 'permissions': [ 'read'|'write'|'mknod', ] }, ], 'initProcessEnabled': True|False, 'sharedMemorySize': 123, 'tmpfs': [ { 'containerPath': 'string', 'size': 123, 'mountOptions': [ 'string', ] }, ], 'maxSwap': 123, 'swappiness': 123 }, 'secrets': [ { 'name': 'string', 'valueFrom': 'string' }, ], 'dependsOn': [ { 'containerName': 'string', 'condition': 'START'|'COMPLETE'|'SUCCESS'|'HEALTHY' }, ], 'startTimeout': 123, 'stopTimeout': 123, 'versionConsistency': 'enabled'|'disabled', 'hostname': 'string', 'user': 'string', 'workingDirectory': 'string', 'disableNetworking': True|False, 'privileged': True|False, 'readonlyRootFilesystem': True|False, 'dnsServers': [ 'string', ], 'dnsSearchDomains': [ 'string', ], 'extraHosts': [ { 'hostname': 'string', 'ipAddress': 'string' }, ], 'dockerSecurityOptions': [ 'string', ], 'interactive': True|False, 'pseudoTerminal': True|False, 'dockerLabels': { 'string': 'string' }, 'ulimits': [ { 'name': 'core'|'cpu'|'data'|'fsize'|'locks'|'memlock'|'msgqueue'|'nice'|'nofile'|'nproc'|'rss'|'rtprio'|'rttime'|'sigpending'|'stack', 'softLimit': 123, 'hardLimit': 123 }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] }, 'healthCheck': { 'command': [ 'string', ], 'interval': 123, 'timeout': 123, 'retries': 123, 'startPeriod': 123 }, 'systemControls': [ { 'namespace': 'string', 'value': 'string' }, ], 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ], 'firelensConfiguration': { 'type': 'fluentd'|'fluentbit', 'options': { 'string': 'string' } }, 'credentialSpecs': [ 'string', ] }, ], 'family': 'string', 'taskRoleArn': 'string', 'executionRoleArn': 'string', 'networkMode': 'bridge'|'host'|'awsvpc'|'none', 'revision': 123, 'volumes': [ { 'name': 'string', 'host': { 'sourcePath': 'string' }, 'dockerVolumeConfiguration': { 'scope': 'task'|'shared', 'autoprovision': True|False, 'driver': 'string', 'driverOpts': { 'string': 'string' }, 'labels': { 'string': 'string' } }, 'efsVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'transitEncryption': 'ENABLED'|'DISABLED', 'transitEncryptionPort': 123, 'authorizationConfig': { 'accessPointId': 'string', 'iam': 'ENABLED'|'DISABLED' } }, 'fsxWindowsFileServerVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 
'authorizationConfig': { 'credentialsParameter': 'string', 'domain': 'string' } }, 'configuredAtLaunch': True|False }, ], 'status': 'ACTIVE'|'INACTIVE'|'DELETE_IN_PROGRESS', 'requiresAttributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'placementConstraints': [ { 'type': 'memberOf', 'expression': 'string' }, ], 'compatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', ], 'runtimePlatform': { 'cpuArchitecture': 'X86_64'|'ARM64', 'operatingSystemFamily': 'WINDOWS_SERVER_2019_FULL'|'WINDOWS_SERVER_2019_CORE'|'WINDOWS_SERVER_2016_FULL'|'WINDOWS_SERVER_2004_CORE'|'WINDOWS_SERVER_2022_CORE'|'WINDOWS_SERVER_2022_FULL'|'WINDOWS_SERVER_2025_CORE'|'WINDOWS_SERVER_2025_FULL'|'WINDOWS_SERVER_20H2_CORE'|'LINUX' }, 'requiresCompatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', ], 'cpu': 'string', 'memory': 'string', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'pidMode': 'host'|'task', 'ipcMode': 'host'|'task'|'none', 'proxyConfiguration': { 'type': 'APPMESH', 'containerName': 'string', 'properties': [ { 'name': 'string', 'value': 'string' }, ] }, 'registeredAt': datetime(2015, 1, 1), 'deregisteredAt': datetime(2015, 1, 1), 'registeredBy': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 }, 'enableFaultInjection': True|False }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] }
Response Structure
(dict) --
taskDefinitions (list) --
The list of deleted task definitions.
(dict) --
The details of a task definition which describes the container and volume definitions of an Amazon Elastic Container Service task. You can specify which Docker images to use, the required resources, and other configurations related to launching the task definition through an Amazon ECS service or task.
taskDefinitionArn (string) --
The full Amazon Resource Name (ARN) of the task definition.
containerDefinitions (list) --
A list of container definitions in JSON format that describe the different containers that make up your task. For more information about container definition parameters and defaults, see Amazon ECS Task Definitions in the Amazon Elastic Container Service Developer Guide.
(dict) --
Container definitions are used in task definitions to describe the different containers that are launched as part of a task.
name (string) --
The name of a container. If you're linking multiple containers together in a task definition, the name of one container can be entered in the links of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name in the docker container create command and the --name option to docker run.
image (string) --
The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either repository-url/image:tag or repository-url/image@digest. For images using tags (repository-url/image:tag), up to 255 characters total are allowed, including letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs (#). For images using digests (repository-url/image@digest), the 255 character limit applies only to the repository URL and image name (everything before the @ sign). The only supported hash function is sha256, and the hash value after sha256: must be exactly 64 characters (only letters A-F, a-f, and numbers 0-9 are allowed). This parameter maps to Image in the docker container create command and the IMAGE parameter of docker run.
When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks.
Images in Amazon ECR repositories can be specified by either using the full registry/repository:tag or registry/repository@digest. For example, 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>:latest or 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE.
Images in official repositories on Docker Hub use a single name (for example, ubuntu or mongo).
Images in other repositories on Docker Hub are qualified with an organization name (for example, amazon/amazon-ecs-agent).
Images in other online repositories are qualified further by a domain name (for example, quay.io/assemblyline/ubuntu).
repositoryCredentials (dict) --
The private repository authentication credentials to use.
credentialsParameter (string) --
The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
cpu (integer) --
The number of cpu units reserved for the container. This parameter maps to CpuShares in the docker container create command and the --cpu-shares option to docker run.
This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level cpu value.
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:
Agent versions less than or equal to 1.1.0: Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.
Agent versions greater than or equal to 1.2.0: Null, zero, and CPU values of 1 are passed to Docker as 2.
Agent versions greater than or equal to 1.84.0: CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.
On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as 0, which Windows interprets as 1% of one CPU.
memory (integer) --
The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task memory value, if one is specified. This parameter maps to Memory in the docker container create command and the --memory option to docker run.
If using the Fargate launch type, this parameter is optional.
If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. If you specify both a container-level memory and memoryReservation value, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
memoryReservation (integer) --
The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when it needs to, up to either the hard limit specified with the memory parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to MemoryReservation in the docker container create command and the --memory-reservation option to docker run.
If a task-level memory value is not specified, you must specify a non-zero integer for one or both of memory or memoryReservation in a container definition. If you specify both, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.
For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation of 128 MiB, and a memory hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
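Applying the 128 MiB / 300 MiB example above, the relevant container definition fragment might look like this; it is a sketch, and the container name and image are hypothetical.
container_definition = {
    'name': 'web',                      # hypothetical
    'image': 'example/image:latest',    # hypothetical
    'memoryReservation': 128,           # soft limit in MiB
    'memory': 300,                      # hard limit in MiB; must be greater than memoryReservation
    'essential': True,
}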
links (list) --
The links parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is bridge. The name:internalName construct is analogous to name:alias in Docker links. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to Links in the docker container create command and the --link option to docker run.
(string) --
portMappings (list) --
The list of port mappings for the container. Port mappings allow containers to access ports on the host container instance to send or receive traffic.
For task definitions that use the awsvpc network mode, only specify the containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
Port mappings on Windows use the NetNAT gateway address rather than localhost. There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself.
This parameter maps to PortBindings in the docker container create command and the --publish option to docker run. If the network mode of a task definition is set to none, then you can't specify port mappings. If the network mode of a task definition is set to host, then host ports must either be undefined or they must match the container port in the port mapping.
(dict) --
Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition.
If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
Most fields of this parameter ( containerPort, hostPort, protocol) map to PortBindings in the docker container create command and the --publish option to docker run. If the network mode of a task definition is set to host, host ports must either be undefined or match the container port in the port mapping.
After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the networkBindings section of DescribeTasks API responses.
containerPort (integer) --
The port number on the container that's bound to the user-specified or automatically assigned host port.
If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort.
If you use containers in a task with the bridge network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see hostPort. Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance.
hostPort (integer) --
The port number on the container instance to reserve for your container.
If you specify a containerPortRange, leave this field empty and the value of the hostPort is set as follows:
For containers in a task with the awsvpc network mode, the hostPort is set to the same value as the containerPort. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports. This is a dynamic mapping strategy.
If you use containers in a task with the awsvpc or host network mode, the hostPort can either be left blank or set to the same value as the containerPort.
If you use containers in a task with the bridge network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the hostPort (or set it to 0) while specifying a containerPort and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version.
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under /proc/sys/net/ipv4/ip_local_port_range. If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range.
The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the remainingResources of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota.
protocol (string) --
The protocol used for the port mapping. Valid values are tcp and udp. The default is tcp. protocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.
name (string) --
The name that's used for the port mapping. This parameter is the name that you use in the serviceConnectConfiguration and the vpcLatticeConfigurations of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
appProtocol (string) --
The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy. If you set this parameter, Amazon ECS adds protocol-specific telemetry in the Amazon ECS console and CloudWatch.
If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP.
appProtocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
containerPortRange (string) --
The port number range on the container that's bound to the dynamically mapped host port range.
The following rules apply when you specify a containerPortRange:
You must use either the bridge network mode or the awsvpc network mode.
This parameter is available for both the EC2 and Fargate launch types.
This parameter is available for both the Linux and Windows operating systems.
The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the ecs-init package
You can specify a maximum of 100 port ranges per container.
You do not specify a hostPortRange. The value of the hostPortRange is set as follows:
For containers in a task with the awsvpc network mode, the hostPortRange is set to the same value as the containerPortRange. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind them to the container ports.
The containerPortRange valid values are between 1 and 65535.
A port can only be included in one port mapping per container.
You cannot specify overlapping port ranges.
The first port in the range must be less than the last port in the range.
Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the Github website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the Amazon ECS Developer Guide.
You can call DescribeTasks to view the hostPortRange which are the host ports that are bound to the container ports.
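Two hedged examples of port mappings of the shapes described above; the names and ports are placeholders. The first is a named awsvpc-mode mapping suitable for Service Connect, the second uses a containerPortRange in bridge mode.
port_mappings_awsvpc = [
    {
        'name': 'web',               # referenced by serviceConnectConfiguration / vpcLatticeConfigurations
        'containerPort': 8080,
        'protocol': 'tcp',
        'appProtocol': 'http'
    }
]

port_mappings_bridge = [
    {
        'containerPortRange': '8000-8010',   # host ports are assigned dynamically from the ephemeral range
        'protocol': 'tcp'
    }
]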
essential (boolean) --
If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential parameter of a container is marked as false, its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.
All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application Architecture in the Amazon Elastic Container Service Developer Guide.
restartPolicy (dict) --
The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether a restart policy is enabled for the container.
ignoredExitCodes (list) --
A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes.
(integer) --
restartAttemptPeriod (integer) --
A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every restartAttemptPeriod seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum restartAttemptPeriod of 60 seconds and a maximum restartAttemptPeriod of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted.
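A container restart policy of this shape might be written as follows; this is a sketch, and the ignored exit code is only an example.
restart_policy = {
    'enabled': True,
    'ignoredExitCodes': [0],       # don't restart on a clean exit (example)
    'restartAttemptPeriod': 300    # container must run 300 seconds before another restart is attempted
}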
entryPoint (list) --
The entry point that's passed to the container. This parameter maps to Entrypoint in the docker container create command and the --entrypoint option to docker run.
(string) --
command (list) --
The command that's passed to the container. This parameter maps to Cmd in the docker container create command and the COMMAND parameter to docker run. If there are multiple arguments, each argument is a separated string in the array.
(string) --
environment (list) --
The environment variables to pass to a container. This parameter maps to Env in the docker container create command and the --env option to docker run.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
environmentFiles (list) --
A list of files containing the environment variables to pass to a container. This parameter maps to the --env-file option to docker run.
You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file contains an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying Environment Variables in the Amazon Elastic Container Service Developer Guide.
(dict) --
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the VARIABLE values.
value (string) --
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type (string) --
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
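For illustration, plain environment variables and an environment file might be combined like this in a container definition; this is a sketch, and the bucket and object key are hypothetical. Remember that values set in environment take precedence over values from environmentFiles.
environment = [
    {'name': 'APP_ENV', 'value': 'production'},   # example variable
]
environment_files = [
    {
        'value': 'arn:aws:s3:::example-bucket/config/app.env',   # hypothetical S3 object; must end in .env
        'type': 's3'
    }
]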
mountPoints (list) --
The mount points for data volumes in your container.
This parameter maps to Volumes in the docker container create command and the --volume option to docker run.
Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives.
(dict) --
The details for a volume mount point that's used in a container definition.
sourceVolume (string) --
The name of the volume to mount. Must be a volume name referenced in the name parameter of task definition volume.
containerPath (string) --
The path on the container to mount the host volume at.
readOnly (boolean) --
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
volumesFrom (list) --
Data volumes to mount from another container. This parameter maps to VolumesFrom in the docker container create command and the --volumes-from option to docker run.
(dict) --
Details on a data volume from another container in the same task definition.
sourceContainer (string) --
The name of another container within the same task definition to mount volumes from.
readOnly (boolean) --
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
linuxParameters (dict) --
Linux-specific modifications that are applied to the default Docker container configuration, such as Linux kernel capabilities. For more information see KernelCapabilities.
capabilities (dict) --
The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker.
add (list) --
The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string) --
drop (list) --
The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string) --
devices (list) --
Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run.
(dict) --
An object representing a container instance host device.
hostPath (string) --
The path for the device on the host container instance.
containerPath (string) --
The path inside the container at which to expose the host device.
permissions (list) --
The explicit permissions to provide to the container for the device. By default, the container has permissions for read, write, and mknod for the device.
(string) --
initProcessEnabled (boolean) --
Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
sharedMemorySize (integer) --
The value for the size (in MiB) of the /dev/shm volume. This parameter maps to the --shm-size option to docker run.
tmpfs (list) --
The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run.
(dict) --
The container path, mount options, and size of the tmpfs mount.
containerPath (string) --
The absolute file path where the tmpfs volume is to be mounted.
size (integer) --
The maximum size (in MiB) of the tmpfs volume.
mountOptions (list) --
The list of tmpfs volume mount options.
Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
(string) --
maxSwap (integer) --
The total amount of swap memory (in MiB) a container can use. This parameter will be translated to the --memory-swap option to docker run where the value would be the sum of the container memory plus the maxSwap value.
If a maxSwap value of 0 is specified, the container will not use swap. Accepted values are 0 or any positive integer. If the maxSwap parameter is omitted, the container will use the swap configuration for the container instance it is running on. A maxSwap value must be set for the swappiness parameter to be used.
swappiness (integer) --
This allows you to tune a container's memory swappiness behavior. A swappiness value of 0 will cause swapping to not happen unless absolutely necessary. A swappiness value of 100 will cause pages to be swapped very aggressively. Accepted values are whole numbers between 0 and 100. If the swappiness parameter is not specified, a default value of 60 is used. If a value is not specified for maxSwap then this parameter is ignored. This parameter maps to the --memory-swappiness option to docker run.
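Drawing the Linux-specific settings above together, a hedged linuxParameters sketch could look like this; all values are illustrative only.
linux_parameters = {
    'capabilities': {
        'add': ['SYS_PTRACE'],     # example capability to add
        'drop': ['NET_RAW']        # example capability to drop
    },
    'initProcessEnabled': True,
    'sharedMemorySize': 128,       # MiB for /dev/shm
    'tmpfs': [
        {'containerPath': '/tmp/scratch', 'size': 64, 'mountOptions': ['rw', 'noexec']}
    ],
    'maxSwap': 512,                # MiB; must be set for swappiness to take effect
    'swappiness': 30
}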
secrets (list) --
The secrets to pass to the container. For more information, see Specifying Sensitive Data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
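A hedged sketch of the secrets list for a container definition; the secret and parameter ARNs are placeholders.
secrets = [
    {
        'name': 'DB_PASSWORD',
        'valueFrom': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:example-db-password-AbCdEf'  # hypothetical
    },
    {
        'name': 'API_KEY',
        'valueFrom': 'arn:aws:ssm:us-east-1:123456789012:parameter/example/api-key'                     # hypothetical
    }
]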
dependsOn (list) --
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition. When a dependency is defined for container startup, it is reversed for container shutdown.
For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
(dict) --
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, it is reversed for container shutdown.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For more information about how to create a container dependency, see Container dependency in the Amazon Elastic Container Service Developer Guide.
containerName (string) --
The name of a container.
condition (string) --
The dependency condition of the container. The following are the available conditions and their behavior:
START - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
COMPLETE - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container.
SUCCESS - This condition is the same as COMPLETE, but it also requires that the container exits with a zero status. This condition can't be set on an essential container.
HEALTHY - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup.
startTimeout (integer) --
Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a COMPLETE, SUCCESS, or HEALTHY status. If a startTimeout value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a STOPPED state.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks using the EC2 launch type, your container instances require at least version 1.26.0 of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
stopTimeout (integer) --
Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks that use the Fargate launch type, the max stop timeout value is 120 seconds and if the parameter is not specified, the default value of 30 seconds is used.
For tasks that use the EC2 launch type, if the stopTimeout parameter isn't specified, the value set for the Amazon ECS container agent configuration variable ECS_CONTAINER_STOP_TIMEOUT is used. If neither the stopTimeout parameter nor the ECS_CONTAINER_STOP_TIMEOUT agent configuration variable is set, then the default value of 30 seconds is used for both Linux and Windows containers. Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
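As a hedged sketch (container names and images are hypothetical), a dependency with start and stop timeouts might be expressed like this, where the application container waits for a non-essential setup container to exit successfully:

# Hypothetical two-container sketch: 'app' starts only after 'db-migrations'
# exits with status 0 (SUCCESS), gives up after 120 seconds, and gets
# 30 seconds to shut down cleanly.
container_definitions = [
    {
        'name': 'db-migrations',
        'image': 'my-repo/migrations:latest',
        'essential': False,
    },
    {
        'name': 'app',
        'image': 'my-repo/app:latest',
        'essential': True,
        'dependsOn': [
            {'containerName': 'db-migrations', 'condition': 'SUCCESS'},
        ],
        'startTimeout': 120,
        'stopTimeout': 30,
    },
]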
versionConsistency (string) --
Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, the value is enabled. If you set the value for a container as disabled, Amazon ECS will not resolve the provided container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see Container image resolution in the Amazon ECS Developer Guide.
hostname (string) --
The hostname to use for your container. This parameter maps to Hostname in the docker container create command and the --hostname option to docker run.
user (string) --
The user to use inside the container. This parameter maps to User in the docker container create command and the --user option to docker run.
You can specify the user using the following formats. If specifying a UID or GID, you must specify it as a positive integer.
user
user:group
uid
uid:gid
user:gid
uid:group
workingDirectory (string) --
The working directory to run commands inside the container in. This parameter maps to WorkingDir in the docker container create command and the --workdir option to docker run.
disableNetworking (boolean) --
When this parameter is true, networking is off within the container. This parameter maps to NetworkDisabled in the docker container create command.
privileged (boolean) --
When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root user). This parameter maps to Privileged in the docker container create command and the --privileged option to docker run
readonlyRootFilesystem (boolean) --
When this parameter is true, the container is given read-only access to its root file system. This parameter maps to ReadonlyRootfs in the docker container create command and the --read-only option to docker run.
dnsServers (list) --
A list of DNS servers that are presented to the container. This parameter maps to Dns in the docker container create command and the --dns option to docker run.
(string) --
dnsSearchDomains (list) --
A list of DNS search domains that are presented to the container. This parameter maps to DnsSearch in the docker container create command and the --dns-search option to docker run.
(string) --
extraHosts (list) --
A list of hostnames and IP address mappings to append to the /etc/hosts file on the container. This parameter maps to ExtraHosts in the docker container create command and the --add-host option to docker run.
(dict) --
Hostnames and IP address entries that are added to the /etc/hosts file of a container via the extraHosts parameter of its ContainerDefinition.
hostname (string) --
The hostname to use in the /etc/hosts entry.
ipAddress (string) --
The IP address to use in the /etc/hosts entry.
dockerSecurityOptions (list) --
A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks using the Fargate launch type.
For Linux tasks on EC2, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems.
For any tasks on EC2, this parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers in the Amazon Elastic Container Service Developer Guide.
This parameter maps to SecurityOpt in the docker container create command and the --security-opt option to docker run.
Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | "credentialspec:CredentialSpecFilePath"
(string) --
interactive (boolean) --
When this parameter is true, you can deploy containerized applications that require stdin or a tty to be allocated. This parameter maps to OpenStdin in the docker container create command and the --interactive option to docker run.
pseudoTerminal (boolean) --
When this parameter is true, a TTY is allocated. This parameter maps to Tty in the docker container create command and the --tty option to docker run.
dockerLabels (dict) --
A key/value map of labels to add to the container. This parameter maps to Labels in the docker container create command and the --label option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
ulimits (list) --
A list of ulimits to set in the container. If a ulimit value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to Ulimits in the docker container create command and the --ulimit option to docker run. Valid naming values are displayed in the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile resource limit parameter which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(dict) --
The ulimit settings to pass to the container.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile resource limit parameter which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
You can specify the ulimit settings for a container in a task definition.
name (string) --
The type of the ulimit.
softLimit (integer) --
The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
hardLimit (integer) --
The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
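For illustration only, a minimal sketch (hypothetical names) that raises the nofile limit for a connection-heavy container:

# Hypothetical fragment: raise the open-file limit for a proxy container.
container_definition = {
    'name': 'proxy',
    'image': 'my-repo/proxy:latest',
    'ulimits': [
        {'name': 'nofile', 'softLimit': 65535, 'hardLimit': 65535},
    ],
}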
logConfiguration (dict) --
The log configuration specification for the container.
This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options).
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs so that they appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues because high throughput might cause the buffer inside Docker to run out of memory.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
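For illustration only, a hedged sketch of an awslogs configuration using non-blocking delivery (the log group, Region, prefix, and buffer size are placeholder values; awslogs-create-group requires the execution role to be allowed to create log groups):

# Hypothetical fragment: send logs to CloudWatch Logs with the awslogs driver,
# creating the log group if needed and buffering up to 25 MiB in non-blocking mode.
container_definition = {
    'name': 'app',
    'image': 'my-repo/app:latest',
    'logConfiguration': {
        'logDriver': 'awslogs',
        'options': {
            'awslogs-group': '/ecs/my-service',
            'awslogs-region': 'us-east-1',
            'awslogs-stream-prefix': 'my-service',
            'awslogs-create-group': 'true',
            'mode': 'non-blocking',
            'max-buffer-size': '25m',
        },
    },
}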
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the Amazon Elastic Container Service Developer Guide.
healthCheck (dict) --
The container health check command and associated configuration parameters for the container. This parameter maps to HealthCheck in the docker container create command and the HEALTHCHECK parameter of docker run.
command (list) --
A string array representing the command that the container runs to determine if it is healthy. The string array must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container's default shell.
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets.
[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]
You don't include the double quotes and brackets when you use the Amazon Web Services Management Console.
CMD-SHELL, curl -f http://localhost/ || exit 1
An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the docker container create command.
(string) --
interval (integer) --
The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a command.
timeout (integer) --
The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a command.
retries (integer) --
The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a command.
startPeriod (integer) --
The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the startPeriod is off. This value applies only when you specify a command.
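For illustration only, a minimal health check sketch (the endpoint and values are placeholders) matching the CMD-SHELL form shown above:

# Hypothetical fragment: mark the container healthy only while the local HTTP
# endpoint responds; allow a 60-second grace period at startup.
container_definition = {
    'name': 'app',
    'image': 'my-repo/app:latest',
    'healthCheck': {
        'command': ['CMD-SHELL', 'curl -f http://localhost/ || exit 1'],
        'interval': 30,
        'timeout': 5,
        'retries': 3,
        'startPeriod': 60,
    },
}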
systemControls (list) --
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure the net.ipv4.tcp_keepalive_time setting to maintain longer-lived connections.
(dict) --
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure the net.ipv4.tcp_keepalive_time setting to maintain longer-lived connections.
We don't recommend that you specify network-related systemControls parameters for multiple containers in a single task that also uses either the awsvpc or host network mode. Doing this has the following disadvantages:
For tasks that use the awsvpc network mode including Fargate, if you set systemControls for any container, it applies to all containers in the task. If you set different systemControls for multiple containers in a single task, the container that's started last determines which systemControls take effect.
For tasks that use the host network mode, the network namespace systemControls aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode.
For tasks that use the host IPC mode, IPC namespace systemControls aren't supported.
For tasks that use the task IPC mode, IPC namespace systemControls values apply to all containers within a task.
namespace (string) --
The namespaced kernel parameter to set a value for.
value (string) --
The value for the namespaced kernel parameter that's specified in namespace.
Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*"
Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted.
All of these values are supported by Fargate.
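For illustration only, a minimal sketch (hypothetical names) that tunes TCP keepalive through systemControls:

# Hypothetical fragment: shorten TCP keepalive for long-lived connections.
container_definition = {
    'name': 'app',
    'image': 'my-repo/app:latest',
    'systemControls': [
        {'namespace': 'net.ipv4.tcp_keepalive_time', 'value': '300'},
    ],
}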
resourceRequirements (list) --
The type and amount of a resource to assign to a container. The only supported resource is a GPU.
(dict) --
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
value (string) --
The value for the specified resource type.
When the type is GPU, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the deviceName for an InferenceAccelerator specified in a task definition.
type (string) --
The type of resource to assign to a container.
firelensConfiguration (dict) --
The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the Amazon Elastic Container Service Developer Guide.
type (string) --
The log router to use. The valid values are fluentd or fluentbit.
options (dict) --
The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is "options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide.
(string) --
(string) --
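For illustration only, a hedged sketch of a Fluent Bit log-router sidecar that loads its configuration from Amazon S3 (the bucket, object key, container name, and image tag are placeholders):

# Hypothetical fragment: FireLens log-router container definition.
log_router_definition = {
    'name': 'log-router',
    'image': 'public.ecr.aws/aws-observability/aws-for-fluent-bit:stable',
    'essential': True,
    'firelensConfiguration': {
        'type': 'fluentbit',
        'options': {
            'enable-ecs-log-metadata': 'true',
            'config-file-type': 's3',
            'config-file-value': 'arn:aws:s3:::my-bucket/fluent.conf',
        },
    },
}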
credentialSpecs (list) --
A list of ARNs in SSM or Amazon S3 to a credential spec (CredSpec) file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the dockerSecurityOptions. The maximum number of ARNs is 1.
There are two formats for each ARN.
credentialspecdomainless:MyARN
You use credentialspecdomainless:MyARN to provide a CredSpec with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret.
Each task that runs on any container instance can join different domains.
You can use this format without joining the container instance to a domain.
credentialspec:MyARN
You use credentialspec:MyARN to provide a CredSpec for a single domain.
You must join the container instance to the domain before you start any tasks that use this task definition.
In both formats, replace MyARN with the ARN in SSM or Amazon S3.
If you provide a credentialspecdomainless:MyARN, the credspec must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers.
(string) --
family (string) --
The name of a family that this task definition is registered to. Up to 255 characters are allowed. Letters (both uppercase and lowercase letters), numbers, hyphens (-), and underscores (_) are allowed.
A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add.
taskRoleArn (string) --
The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the task permission to call Amazon Web Services APIs on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
networkMode (string) --
The Docker networking mode to use for the containers in the task. The valid values are none, bridge, awsvpc, and host. If no network mode is specified, the default is bridge.
For Amazon ECS tasks on Fargate, the awsvpc network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, <default> or awsvpc can be used. If the network mode is set to none, you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The host and awsvpc network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the bridge mode.
With the host and awsvpc network modes, exposed container ports are mapped directly to the corresponding host port (for the host network mode) or the attached elastic network interface port (for the awsvpc network mode), so you cannot take advantage of dynamic host port mappings.
If the network mode is awsvpc, the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
If the network mode is host, you cannot run multiple instantiations of the same task on a single container instance when port mappings are used.
revision (integer) --
The revision of the task in a particular family. The revision is a version number of a task definition in a family. When you register a task definition for the first time, the revision is 1. Each time that you register a new revision of a task definition in the same family, the revision value always increases by one. This is even if you deregistered previous revisions in this family.
volumes (list) --
The list of data volume definitions for the task. For more information, see Using data volumes in tasks in the Amazon Elastic Container Service Developer Guide.
(dict) --
The data volume configuration for tasks launched using this task definition. Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes but only one volume configured at launch is supported. Each volume defined in the volume configuration may only specify a name and one of either configuredAtLaunch, dockerVolumeConfiguration, efsVolumeConfiguration, fsxWindowsFileServerVolumeConfiguration, or host. If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks.
name (string) --
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task.
For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition.
When a volume is using the efsVolumeConfiguration, the name is required.
host (dict) --
This parameter is specified when you use bind mount host volumes. The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running.
Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my\path:C:\my\path or D:\:C:\my\path.
sourcePath (string) --
When the host parameter is used, specify a sourcePath to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
If you're using the Fargate launch type, the sourcePath parameter is not supported.
dockerVolumeConfiguration (dict) --
This parameter is specified when you use Docker volumes.
Windows containers only support the use of the local driver. To use bind mounts, specify the host parameter instead.
scope (string) --
The scope for the Docker volume that determines its lifecycle. Docker volumes that are scoped to a task are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as shared persist after the task stops.
autoprovision (boolean) --
If this value is true, the Docker volume is created if it doesn't already exist.
driver (string) --
The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use docker plugin ls to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. This parameter maps to Driver in the docker container create command and the --driver option to docker volume create.
driverOpts (dict) --
A map of Docker driver-specific options passed through. This parameter maps to DriverOpts in the docker create-volume command and the --opt option to docker volume create.
(string) --
(string) --
labels (dict) --
Custom metadata to add to your Docker volume. This parameter maps to Labels in the docker container create command and the --label option to docker volume create.
(string) --
(string) --
efsVolumeConfiguration (dict) --
This parameter is specified when you use an Amazon Elastic File System file system for task storage.
fileSystemId (string) --
The Amazon EFS file system ID to use.
rootDirectory (string) --
The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying / will have the same effect as omitting this parameter.
transitEncryption (string) --
Determines whether to use encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be turned on if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of DISABLED is used. For more information, see Encrypting data in transit in the Amazon Elastic File System User Guide.
transitEncryptionPort (integer) --
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS mount helper in the Amazon Elastic File System User Guide.
authorizationConfig (dict) --
The authorization configuration details for the Amazon EFS file system.
accessPointId (string) --
The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the EFSVolumeConfiguration must either be omitted or set to / which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be on in the EFSVolumeConfiguration. For more information, see Working with Amazon EFS access points in the Amazon Elastic File System User Guide.
iam (string) --
Determines whether to use the Amazon ECS task role defined in a task definition when mounting the Amazon EFS file system. If it is turned on, transit encryption must be turned on in the EFSVolumeConfiguration. If this parameter is omitted, the default value of DISABLED is used. For more information, see Using Amazon EFS access points in the Amazon Elastic Container Service Developer Guide.
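For illustration only, a hedged sketch of a task definition volume entry backed by Amazon EFS with transit encryption, an access point, and IAM authorization (the volume name, file system ID, and access point ID are placeholders):

# Hypothetical volume entry for an EFS-backed task volume.
volume = {
    'name': 'shared-data',
    'efsVolumeConfiguration': {
        'fileSystemId': 'fs-0123456789abcdef0',
        'rootDirectory': '/',
        'transitEncryption': 'ENABLED',
        'authorizationConfig': {
            'accessPointId': 'fsap-0123456789abcdef0',
            'iam': 'ENABLED',
        },
    },
}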
fsxWindowsFileServerVolumeConfiguration (dict) --
This parameter is specified when you use Amazon FSx for Windows File Server file system for task storage.
fileSystemId (string) --
The Amazon FSx for Windows File Server file system ID to use.
rootDirectory (string) --
The directory within the Amazon FSx for Windows File Server file system to mount as the root directory inside the host.
authorizationConfig (dict) --
The authorization configuration details for the Amazon FSx for Windows File Server file system.
credentialsParameter (string) --
The authorization credential option to use. The authorization credential options can be provided using either the Amazon Resource Name (ARN) of a Secrets Manager secret or an SSM Parameter Store parameter. The ARN refers to the stored credentials.
domain (string) --
A fully qualified domain name hosted by a Directory Service Managed Microsoft AD (Active Directory) or a self-hosted AD on Amazon EC2.
configuredAtLaunch (boolean) --
Indicates whether the volume should be configured at launch time. This is used to create Amazon EBS volumes for standalone tasks or tasks created as part of a service. Each task definition revision may only have one volume configured at launch in the volume configuration.
To configure a volume at launch time, use this task definition revision and specify a volumeConfigurations object when calling the CreateService, UpdateService, RunTask or StartTask APIs.
status (string) --
The status of the task definition.
requiresAttributes (list) --
The container instance attributes required by your task. When an Amazon EC2 instance is registered to your cluster, the Amazon ECS container agent assigns some standard attributes to the instance. You can apply custom attributes. These are specified as key-value pairs using the Amazon ECS console or the PutAttributes API. These attributes are used when determining task placement for tasks hosted on Amazon EC2 instances. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
(dict) --
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the attribute. The name must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
value (string) --
The value of the attribute. The value must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
targetType (string) --
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
targetId (string) --
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
placementConstraints (list) --
An array of placement constraint objects to use for tasks.
(dict) --
The constraint on task placement in the task definition. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. The MemberOf constraint restricts selection to be from a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
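For illustration only, a minimal placement constraint sketch using the cluster query language (the Availability Zones shown are placeholders):

# Hypothetical constraint: only place tasks on instances in the listed zones.
placement_constraint = {
    'type': 'memberOf',
    'expression': 'attribute:ecs.availability-zone in [us-east-1a, us-east-1b]',
}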
compatibilities (list) --
Amazon ECS validates the task definition parameters with those supported by the launch type. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
(string) --
runtimePlatform (dict) --
The operating system that your task definitions are running on. A platform family is specified only for tasks using the Fargate launch type.
When you specify a task in a service, this value must match the runtimePlatform value of the service.
cpuArchitecture (string) --
The CPU architecture.
You can run your Linux tasks on an ARM-based platform by setting the value to ARM64. This option is available for tasks that run on Linux Amazon EC2 instances or Linux containers on Fargate.
operatingSystemFamily (string) --
The operating system.
requiresCompatibilities (list) --
The task launch types the task definition was validated against. The valid values are EC2, FARGATE, and EXTERNAL. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
(string) --
cpu (string) --
The number of cpu units used by the task. If you use the EC2 launch type, this field is optional. Any value can be used. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the memory parameter.
If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between 128 CPU units (0.125 vCPUs) and 196608 CPU units (192 vCPUs).
This field is required for Fargate. For information about the valid values, see Task size in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The amount (in MiB) of memory used by the task.
If your tasks run on Amazon EC2 instances, you must specify either a task-level memory value or a container-level memory value. This field is optional and any value can be used. If a task-level memory value is specified, the container-level memory value is optional. For more information regarding container-level memory and memory reservation, see ContainerDefinition.
If your tasks run on Fargate, this field is required. You must use one of the following values. The value you choose determines your range of valid values for the cpu parameter.
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu values: 256 (.25 vCPU)
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available cpu values: 512 (.5 vCPU)
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available cpu values: 1024 (1 vCPU)
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available cpu values: 2048 (2 vCPU)
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available cpu values: 4096 (4 vCPU)
Between 16 GB and 60 GB in 4 GB increments - Available cpu values: 8192 (8 vCPU) This option requires Linux platform 1.4.0 or later.
Between 32 GB and 120 GB in 8 GB increments - Available cpu values: 16384 (16 vCPU) This option requires Linux platform 1.4.0 or later.
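For illustration only, a hedged register_task_definition sketch that picks one of the valid Fargate pairings above, 1024 CPU units with 3 GB of memory (the family, container name, and image are placeholders):

# Hypothetical sketch: register a Fargate task sized at 1 vCPU / 3 GB.
import boto3

ecs = boto3.client('ecs')
response = ecs.register_task_definition(
    family='my-fargate-task',
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='1024',
    memory='3072',
    containerDefinitions=[
        {'name': 'app', 'image': 'my-repo/app:latest', 'essential': True},
    ],
)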
inferenceAccelerators (list) --
The Elastic Inference accelerator that's associated with the task.
(dict) --
Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name. The deviceName must also be referenced in a container definition as a ResourceRequirement.
deviceType (string) --
The Elastic Inference accelerator type to use.
pidMode (string) --
The process namespace to use for the containers in the task. The valid values are host or task. On Fargate for Linux containers, the only valid value is task. For example, monitoring sidecars might need pidMode to access information about other containers running in the same task.
If host is specified, all containers within the tasks that specified the host PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.
If task is specified, all containers within the specified task share the same process namespace.
If no value is specified, the default is a private namespace for each container.
If the host PID mode is used, there's a heightened risk of undesired process namespace exposure.
ipcMode (string) --
The IPC resource namespace to use for the containers in the task. The valid values are host, task, or none. If host is specified, then all containers within the tasks that specified the host IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If task is specified, all containers within the specified task share the same IPC resources. If none is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance.
If the host IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure.
If you are setting namespaced kernel parameters using systemControls for the containers in the task, the following will apply to your IPC resource namespace. For more information, see System Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host IPC mode, IPC namespace related systemControls are not supported.
For tasks that use the task IPC mode, IPC namespace related systemControls will apply to all containers within a task.
proxyConfiguration (dict) --
The configuration details for the App Mesh proxy.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the ecs-init package to use a proxy configuration. If your container instances are launched from the Amazon ECS optimized AMI version 20190301 or later, they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
type (string) --
The proxy type. The only supported value is APPMESH.
containerName (string) --
The name of the container that will serve as the App Mesh proxy.
properties (list) --
The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs.
IgnoredUID - (Required) The user ID (UID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredGID is specified, this field can be empty.
IgnoredGID - (Required) The group ID (GID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredUID is specified, this field can be empty.
AppPorts - (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the ProxyIngressPort and ProxyEgressPort.
ProxyIngressPort - (Required) Specifies the port that incoming traffic to the AppPorts is directed to.
ProxyEgressPort - (Required) Specifies the port that outgoing traffic from the AppPorts is directed to.
EgressIgnoredPorts - (Required) The egress traffic going to the specified ports is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
EgressIgnoredIPs - (Required) The egress traffic going to the specified IP addresses is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
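For illustration only, a hedged App Mesh proxy configuration sketch (the container name, UID, application port, and proxy ports follow common App Mesh examples and are placeholders):

# Hypothetical proxyConfiguration: route application traffic on port 8080
# through an Envoy sidecar running as UID 1337.
proxy_configuration = {
    'type': 'APPMESH',
    'containerName': 'envoy',
    'properties': [
        {'name': 'IgnoredUID', 'value': '1337'},
        {'name': 'IgnoredGID', 'value': ''},
        {'name': 'AppPorts', 'value': '8080'},
        {'name': 'ProxyIngressPort', 'value': '15000'},
        {'name': 'ProxyEgressPort', 'value': '15001'},
        {'name': 'EgressIgnoredPorts', 'value': ''},
        {'name': 'EgressIgnoredIPs', 'value': '169.254.170.2,169.254.169.254'},
    ],
}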
registeredAt (datetime) --
The Unix timestamp for the time when the task definition was registered.
deregisteredAt (datetime) --
The Unix timestamp for the time when the task definition was deregistered.
registeredBy (string) --
The principal that registered the task definition.
ephemeralStorage (dict) --
The ephemeral storage settings to use for tasks run with the task definition.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
enableFaultInjection (boolean) --
Enables fault injection and allows for fault injection requests to be accepted from the task's containers. The default value is false.
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide.
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
{'taskSet': {'launchType': {'MANAGED_INSTANCES'}}}
Deletes a specified task set within a service. This is used when a service uses the EXTERNAL deployment controller type. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.
See also: AWS API Documentation
Request Syntax
client.delete_task_set( cluster='string', service='string', taskSet='string', force=True|False )
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that the task set to delete is found in.
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the service that hosts the task set to delete.
string
[REQUIRED]
The task set ID or full Amazon Resource Name (ARN) of the task set to delete.
boolean
If true, you can delete a task set even if it hasn't been scaled down to zero.
dict
Response Syntax
{ 'taskSet': { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } } }
Response Structure
(dict) --
taskSet (dict) --
Details about the task set.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that's associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
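As a rough illustration of how base and weight interact, the following Python sketch builds a strategy list that keeps a baseline of two tasks on one provider and splits the remaining tasks 1:4 with FARGATE_SPOT. The provider name my-asg-provider is a placeholder and is not defined in this reference.
# Hedged sketch: a capacity provider strategy with a base and weighted split.
capacity_provider_strategy = [
    # At least 2 tasks always land on this (hypothetical) Auto Scaling group provider.
    {'capacityProvider': 'my-asg-provider', 'base': 2, 'weight': 1},
    # After the base is satisfied, 4 tasks run on FARGATE_SPOT for every 1 task above.
    {'capacityProvider': 'FARGATE_SPOT', 'base': 0, 'weight': 4},
]
A list like this can be supplied as the capacityProviderStrategy parameter of RunTask or CreateService once both providers are associated with the cluster.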
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
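For reference, a minimal awsvpcConfiguration as it might be passed in a request; the subnet and security group IDs below are placeholders.
# Hedged sketch of a task set network configuration.
network_configuration = {
    'awsvpcConfiguration': {
        'subnets': ['subnet-0123456789abcdef0'],     # up to 16 subnets
        'securityGroups': ['sg-0123456789abcdef0'],  # up to 5 security groups; the VPC default group is used if omitted
        'assignPublicIp': 'DISABLED',                # must be DISABLED when the deploymentController is ECS
    }
}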
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
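The sketch below shows one possible loadBalancers entry with the advanced blue/green settings described above; every ARN and name is a truncated placeholder, not a real resource.
# Hedged sketch of a load balancer configuration for blue/green traffic shifting.
load_balancers = [
    {
        'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/blue/0123456789abcdef',
        'containerName': 'web',      # hypothetical container name from the task definition
        'containerPort': 80,
        'advancedConfiguration': {
            'alternateTargetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/green/fedcba9876543210',
            'productionListenerRule': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:listener-rule/app/example/prod-rule',
            'testListenerRule': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:listener-rule/app/example/test-rule',
            'roleArn': 'arn:aws:iam::123456789012:role/ecsInfrastructureRole',
        },
    },
]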
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
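A hedged example of a serviceRegistries entry that uses a containerName and containerPort combination (and therefore omits port); the registry ARN and container name are placeholders.
# Hedged sketch of a Cloud Map service registry entry.
service_registries = [
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example',
        'containerName': 'web',   # required with bridge/host network mode or SRV records
        'containerPort': 80,
    },
]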
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment.
{'taskDefinition': {'compatibilities': {'MANAGED_INSTANCES'}, 'requiresCompatibilities': {'MANAGED_INSTANCES'}}}
Deregisters the specified task definition by family and revision. Upon deregistration, the task definition is marked as INACTIVE. Existing tasks and services that reference an INACTIVE task definition continue to run without disruption. Existing services that reference an INACTIVE task definition can still scale up or down by modifying the service's desired count. If you want to delete a task definition revision, you must first deregister the task definition revision.
You can't use an INACTIVE task definition to run new tasks or create new services, and you can't update an existing service to reference an INACTIVE task definition. However, there may be up to a 10-minute window following deregistration where these restrictions have not yet taken effect.
You must deregister a task definition revision before you delete it. For more information, see DeleteTaskDefinitions.
See also: AWS API Documentation
Request Syntax
client.deregister_task_definition( taskDefinition='string' )
string
[REQUIRED]
The family and revision (family:revision) or full Amazon Resource Name (ARN) of the task definition to deregister. You must specify a revision.
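A minimal sketch of calling this operation with boto3; the family and revision sample-fargate:1 is a placeholder, not an identifier defined in this reference.
import boto3

client = boto3.client('ecs')
# 'sample-fargate:1' is a placeholder family:revision; substitute your own.
response = client.deregister_task_definition(taskDefinition='sample-fargate:1')
print(response['taskDefinition']['status'])   # expected to be 'INACTIVE' after deregistration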
dict
Response Syntax
{ 'taskDefinition': { 'taskDefinitionArn': 'string', 'containerDefinitions': [ { 'name': 'string', 'image': 'string', 'repositoryCredentials': { 'credentialsParameter': 'string' }, 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'links': [ 'string', ], 'portMappings': [ { 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'name': 'string', 'appProtocol': 'http'|'http2'|'grpc', 'containerPortRange': 'string' }, ], 'essential': True|False, 'restartPolicy': { 'enabled': True|False, 'ignoredExitCodes': [ 123, ], 'restartAttemptPeriod': 123 }, 'entryPoint': [ 'string', ], 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'mountPoints': [ { 'sourceVolume': 'string', 'containerPath': 'string', 'readOnly': True|False }, ], 'volumesFrom': [ { 'sourceContainer': 'string', 'readOnly': True|False }, ], 'linuxParameters': { 'capabilities': { 'add': [ 'string', ], 'drop': [ 'string', ] }, 'devices': [ { 'hostPath': 'string', 'containerPath': 'string', 'permissions': [ 'read'|'write'|'mknod', ] }, ], 'initProcessEnabled': True|False, 'sharedMemorySize': 123, 'tmpfs': [ { 'containerPath': 'string', 'size': 123, 'mountOptions': [ 'string', ] }, ], 'maxSwap': 123, 'swappiness': 123 }, 'secrets': [ { 'name': 'string', 'valueFrom': 'string' }, ], 'dependsOn': [ { 'containerName': 'string', 'condition': 'START'|'COMPLETE'|'SUCCESS'|'HEALTHY' }, ], 'startTimeout': 123, 'stopTimeout': 123, 'versionConsistency': 'enabled'|'disabled', 'hostname': 'string', 'user': 'string', 'workingDirectory': 'string', 'disableNetworking': True|False, 'privileged': True|False, 'readonlyRootFilesystem': True|False, 'dnsServers': [ 'string', ], 'dnsSearchDomains': [ 'string', ], 'extraHosts': [ { 'hostname': 'string', 'ipAddress': 'string' }, ], 'dockerSecurityOptions': [ 'string', ], 'interactive': True|False, 'pseudoTerminal': True|False, 'dockerLabels': { 'string': 'string' }, 'ulimits': [ { 'name': 'core'|'cpu'|'data'|'fsize'|'locks'|'memlock'|'msgqueue'|'nice'|'nofile'|'nproc'|'rss'|'rtprio'|'rttime'|'sigpending'|'stack', 'softLimit': 123, 'hardLimit': 123 }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] }, 'healthCheck': { 'command': [ 'string', ], 'interval': 123, 'timeout': 123, 'retries': 123, 'startPeriod': 123 }, 'systemControls': [ { 'namespace': 'string', 'value': 'string' }, ], 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ], 'firelensConfiguration': { 'type': 'fluentd'|'fluentbit', 'options': { 'string': 'string' } }, 'credentialSpecs': [ 'string', ] }, ], 'family': 'string', 'taskRoleArn': 'string', 'executionRoleArn': 'string', 'networkMode': 'bridge'|'host'|'awsvpc'|'none', 'revision': 123, 'volumes': [ { 'name': 'string', 'host': { 'sourcePath': 'string' }, 'dockerVolumeConfiguration': { 'scope': 'task'|'shared', 'autoprovision': True|False, 'driver': 'string', 'driverOpts': { 'string': 'string' }, 'labels': { 'string': 'string' } }, 'efsVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'transitEncryption': 'ENABLED'|'DISABLED', 'transitEncryptionPort': 123, 'authorizationConfig': { 'accessPointId': 'string', 'iam': 'ENABLED'|'DISABLED' } }, 'fsxWindowsFileServerVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 
'authorizationConfig': { 'credentialsParameter': 'string', 'domain': 'string' } }, 'configuredAtLaunch': True|False }, ], 'status': 'ACTIVE'|'INACTIVE'|'DELETE_IN_PROGRESS', 'requiresAttributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'placementConstraints': [ { 'type': 'memberOf', 'expression': 'string' }, ], 'compatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', ], 'runtimePlatform': { 'cpuArchitecture': 'X86_64'|'ARM64', 'operatingSystemFamily': 'WINDOWS_SERVER_2019_FULL'|'WINDOWS_SERVER_2019_CORE'|'WINDOWS_SERVER_2016_FULL'|'WINDOWS_SERVER_2004_CORE'|'WINDOWS_SERVER_2022_CORE'|'WINDOWS_SERVER_2022_FULL'|'WINDOWS_SERVER_2025_CORE'|'WINDOWS_SERVER_2025_FULL'|'WINDOWS_SERVER_20H2_CORE'|'LINUX' }, 'requiresCompatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', ], 'cpu': 'string', 'memory': 'string', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'pidMode': 'host'|'task', 'ipcMode': 'host'|'task'|'none', 'proxyConfiguration': { 'type': 'APPMESH', 'containerName': 'string', 'properties': [ { 'name': 'string', 'value': 'string' }, ] }, 'registeredAt': datetime(2015, 1, 1), 'deregisteredAt': datetime(2015, 1, 1), 'registeredBy': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 }, 'enableFaultInjection': True|False } }
Response Structure
(dict) --
taskDefinition (dict) --
The full description of the deregistered task.
taskDefinitionArn (string) --
The full Amazon Resource Name (ARN) of the task definition.
containerDefinitions (list) --
A list of container definitions in JSON format that describe the different containers that make up your task. For more information about container definition parameters and defaults, see Amazon ECS Task Definitions in the Amazon Elastic Container Service Developer Guide.
(dict) --
Container definitions are used in task definitions to describe the different containers that are launched as part of a task.
name (string) --
The name of a container. If you're linking multiple containers together in a task definition, the name of one container can be entered in the links of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name in the docker container create command and the --name option to docker run.
image (string) --
The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either repository-url/image:tag or repository-url/image@digest. For images using tags (repository-url/image:tag), up to 255 characters total are allowed, including letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs (#). For images using digests (repository-url/image@digest), the 255 character limit applies only to the repository URL and image name (everything before the @ sign). The only supported hash function is sha256, and the hash value after sha256: must be exactly 64 characters (only letters A-F, a-f, and numbers 0-9 are allowed). This parameter maps to Image in the docker container create command and the IMAGE parameter of docker run.
When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks.
Images in Amazon ECR repositories can be specified by either using the full registry/repository:tag or registry/repository@digest. For example, 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>:latest or 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE.
Images in official repositories on Docker Hub use a single name (for example, ubuntu or mongo).
Images in other repositories on Docker Hub are qualified with an organization name (for example, amazon/amazon-ecs-agent).
Images in other online repositories are qualified further by a domain name (for example, quay.io/assemblyline/ubuntu).
repositoryCredentials (dict) --
The private repository authentication credentials to use.
credentialsParameter (string) --
The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
cpu (integer) --
The number of cpu units reserved for the container. This parameter maps to CpuShares in the docker container create command and the --cpu-shares option to docker run.
This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level cpu value.
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:
Agent versions less than or equal to 1.1.0: Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.
Agent versions greater than or equal to 1.2.0: Null, zero, and CPU values of 1 are passed to Docker as 2.
Agent versions greater than or equal to 1.84.0: CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.
On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as 0, which Windows interprets as 1% of one CPU.
memory (integer) --
The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task memory value, if one is specified. This parameter maps to Memory in the docker container create command and the --memory option to docker run.
If using the Fargate launch type, this parameter is optional.
If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. If you specify both a container-level memory and memoryReservation value, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
memoryReservation (integer) --
The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when it needs to, up to either the hard limit specified with the memory parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to MemoryReservation in the docker container create command and the --memory-reservation option to docker run.
If a task-level memory value is not specified, you must specify a non-zero integer for one or both of memory or memoryReservation in a container definition. If you specify both, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.
For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation of 128 MiB, and a memory hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
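To make the interaction between cpu, memory, and memoryReservation concrete, here is a hedged fragment of a container definition; the container name and image reference are illustrative only.
# Hedged sketch: soft and hard memory limits with a CPU share reservation.
container_definition = {
    'name': 'web',                              # hypothetical container name
    'image': 'public.ecr.aws/nginx/nginx:latest',  # example image reference
    'cpu': 512,                                 # 512 of 1,024 shares on a single core
    'memoryReservation': 128,                   # soft limit in MiB
    'memory': 300,                              # hard limit in MiB; must be greater than memoryReservation
}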
links (list) --
The links parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is bridge. The name:internalName construct is analogous to name:alias in Docker links. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to Links in the docker container create command and the --link option to docker run.
(string) --
portMappings (list) --
The list of port mappings for the container. Port mappings allow containers to access ports on the host container instance to send or receive traffic.
For task definitions that use the awsvpc network mode, only specify the containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
Port mappings on Windows use the NetNAT gateway address rather than localhost. There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself.
This parameter maps to PortBindings in the docker container create command and the --publish option to docker run. If the network mode of a task definition is set to none, then you can't specify port mappings. If the network mode of a task definition is set to host, then host ports must either be undefined or they must match the container port in the port mapping.
(dict) --
Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition.
If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
Most fields of this parameter (containerPort, hostPort, protocol) map to PortBindings in the docker container create command and the --publish option to docker run. If the network mode of a task definition is set to host, host ports must either be undefined or match the container port in the port mapping.
After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the networkBindings section of DescribeTasks API responses.
containerPort (integer) --
The port number on the container that's bound to the user-specified or automatically assigned host port.
If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort.
If you use containers in a task with the bridge network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see hostPort. Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance.
hostPort (integer) --
The port number on the container instance to reserve for your container.
If you specify a containerPortRange, leave this field empty and the value of the hostPort is set as follows:
For containers in a task with the awsvpc network mode, the hostPort is set to the same value as the containerPort. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports. This is a dynamic mapping strategy.
If you use containers in a task with the awsvpc or host network mode, the hostPort can either be left blank or set to the same value as the containerPort.
If you use containers in a task with the bridge network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the hostPort (or set it to 0) while specifying a containerPort and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version.
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under /proc/sys/net/ipv4/ip_local_port_range. If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range.
The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the remainingResources of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota.
protocol (string) --
The protocol used for the port mapping. Valid values are tcp and udp. The default is tcp. protocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.
name (string) --
The name that's used for the port mapping. This parameter is the name that you use in the serviceConnectConfiguration and the vpcLatticeConfigurations of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
appProtocol (string) --
The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy. If you set this parameter, Amazon ECS adds protocol-specific telemetry in the Amazon ECS console and CloudWatch.
If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP.
appProtocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
containerPortRange (string) --
The port number range on the container that's bound to the dynamically mapped host port range.
The following rules apply when you specify a containerPortRange:
You must use either the bridge network mode or the awsvpc network mode.
This parameter is available for both the EC2 and Fargate launch types.
This parameter is available for both the Linux and Windows operating systems.
The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the ecs-init package
You can specify a maximum of 100 port ranges per container.
You do not specify a hostPortRange. The value of the hostPortRange is set as follows:
For containers in a task with the awsvpc network mode, the hostPortRange is set to the same value as the containerPortRange. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes it to docker to bind them to the container ports.
The containerPortRange valid values are between 1 and 65535.
A port can only be included in one port mapping per container.
You cannot specify overlapping port ranges.
The first port in the range must be less than the last port in the range.
Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the Github website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the Amazon ECS Developer Guide.
You can call DescribeTasks to view the hostPortRange which are the host ports that are bound to the container ports.
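Below is a hedged sketch of a portMappings list showing both a single named mapping and a containerPortRange mapping; the names, ports, and range are illustrative.
# Hedged sketch of port mappings in a container definition.
port_mappings = [
    # Single mapping for awsvpc mode: hostPort is omitted or must equal containerPort.
    {'containerPort': 80, 'protocol': 'tcp', 'name': 'web', 'appProtocol': 'http'},
    # Range mapping for bridge mode: the hostPortRange is assigned dynamically by the agent.
    {'containerPortRange': '8000-8010', 'protocol': 'tcp'},
]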
essential (boolean) --
If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential parameter of a container is marked as false, its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.
All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application Architecture in the Amazon Elastic Container Service Developer Guide.
restartPolicy (dict) --
The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether a restart policy is enabled for the container.
ignoredExitCodes (list) --
A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes.
(integer) --
restartAttemptPeriod (integer) --
A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every restartAttemptPeriod seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum restartAttemptPeriod of 60 seconds and a maximum restartAttemptPeriod of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted.
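A minimal restartPolicy sketch, assuming you want clean exits (code 0) to be treated as final; the values shown are illustrative.
# Hedged sketch of a container restart policy.
restart_policy = {
    'enabled': True,
    'ignoredExitCodes': [0],       # a clean exit is not restarted
    'restartAttemptPeriod': 300,   # seconds the container must run before a restart is allowed (60-1800)
}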
entryPoint (list) --
The entry point that's passed to the container. This parameter maps to Entrypoint in the docker container create command and the --entrypoint option to docker run.
(string) --
command (list) --
The command that's passed to the container. This parameter maps to Cmd in the docker container create command and the COMMAND parameter to docker run. If there are multiple arguments, each argument is a separated string in the array.
(string) --
environment (list) --
The environment variables to pass to a container. This parameter maps to Env in the docker container create command and the --env option to docker run.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
environmentFiles (list) --
A list of files containing the environment variables to pass to a container. This parameter maps to the --env-file option to docker run.
You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file contains an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying Environment Variables in the Amazon Elastic Container Service Developer Guide.
(dict) --
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the VARIABLE values.
value (string) --
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type (string) --
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
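A hedged sketch of the environment and environmentFiles parameters used together; the variable name and the S3 object ARN are placeholders.
# Hedged sketch of plain environment variables plus an S3-hosted .env file.
environment = [
    {'name': 'APP_ENV', 'value': 'production'},   # hypothetical variable
]
environment_files = [
    # The bucket and key are placeholders; the object must use a .env file extension.
    {'value': 'arn:aws:s3:::example-bucket/config/app.env', 'type': 's3'},
]
# Variables set in environment take precedence over the same names defined in the files.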
mountPoints (list) --
The mount points for data volumes in your container.
This parameter maps to Volumes in the docker container create command and the --volume option to docker run.
Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives.
(dict) --
The details for a volume mount point that's used in a container definition.
sourceVolume (string) --
The name of the volume to mount. Must be a volume name referenced in the name parameter of task definition volume.
containerPath (string) --
The path on the container to mount the host volume at.
readOnly (boolean) --
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
volumesFrom (list) --
Data volumes to mount from another container. This parameter maps to VolumesFrom in the docker container create command and the --volumes-from option to docker run.
(dict) --
Details on a data volume from another container in the same task definition.
sourceContainer (string) --
The name of another container within the same task definition to mount volumes from.
readOnly (boolean) --
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
linuxParameters (dict) --
Linux-specific modifications that are applied to the default Docker container configuration, such as Linux kernel capabilities. For more information see KernelCapabilities.
capabilities (dict) --
The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker.
add (list) --
The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string) --
drop (list) --
The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string) --
devices (list) --
Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run.
(dict) --
An object representing a container instance host device.
hostPath (string) --
The path for the device on the host container instance.
containerPath (string) --
The path inside the container at which to expose the host device.
permissions (list) --
The explicit permissions to provide to the container for the device. By default, the container has permissions for read, write, and mknod for the device.
(string) --
initProcessEnabled (boolean) --
Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
sharedMemorySize (integer) --
The value for the size (in MiB) of the /dev/shm volume. This parameter maps to the --shm-size option to docker run.
tmpfs (list) --
The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run.
(dict) --
The container path, mount options, and size of the tmpfs mount.
containerPath (string) --
The absolute file path where the tmpfs volume is to be mounted.
size (integer) --
The maximum size (in MiB) of the tmpfs volume.
mountOptions (list) --
The list of tmpfs volume mount options.
Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
(string) --
maxSwap (integer) --
The total amount of swap memory (in MiB) a container can use. This parameter will be translated to the --memory-swap option to docker run where the value would be the sum of the container memory plus the maxSwap value.
If a maxSwap value of 0 is specified, the container will not use swap. Accepted values are 0 or any positive integer. If the maxSwap parameter is omitted, the container will use the swap configuration for the container instance it is running on. A maxSwap value must be set for the swappiness parameter to be used.
swappiness (integer) --
This allows you to tune a container's memory swappiness behavior. A swappiness value of 0 will cause swapping to not happen unless absolutely necessary. A swappiness value of 100 will cause pages to be swapped very aggressively. Accepted values are whole numbers between 0 and 100. If the swappiness parameter is not specified, a default value of 60 is used. If a value is not specified for maxSwap then this parameter is ignored. This parameter maps to the --memory-swappiness option to docker run.
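A hedged linuxParameters sketch, assuming the EC2 launch type; the capability names, paths, and sizes are illustrative.
# Hedged sketch of Linux-specific container settings.
linux_parameters = {
    'capabilities': {'add': ['NET_ADMIN'], 'drop': ['MKNOD']},
    'initProcessEnabled': True,            # run an init process to forward signals and reap processes
    'sharedMemorySize': 64,                # size of /dev/shm in MiB
    'tmpfs': [{'containerPath': '/tmp/scratch', 'size': 128, 'mountOptions': ['rw', 'noexec']}],
    'maxSwap': 256,                        # MiB; must be set for swappiness to take effect
    'swappiness': 30,
}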
secrets (list) --
The secrets to pass to the container. For more information, see Specifying Sensitive Data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
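A hedged secrets sketch showing one Secrets Manager ARN and one SSM Parameter Store ARN; both ARNs are placeholders.
# Hedged sketch of the secrets container definition parameter.
secrets = [
    {'name': 'DB_PASSWORD',
     'valueFrom': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:db-password-AbCdEf'},
    {'name': 'API_KEY',
     'valueFrom': 'arn:aws:ssm:us-east-1:123456789012:parameter/app/api-key'},
]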
dependsOn (list) --
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition. When a dependency is defined for container startup, for container shutdown it is reversed.
For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
(dict) --
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For more information about how to create a container dependency, see Container dependency in the Amazon Elastic Container Service Developer Guide.
containerName (string) --
The name of a container.
condition (string) --
The dependency condition of the container. The following are the available conditions and their behavior:
START - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
COMPLETE - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container.
SUCCESS - This condition is the same as COMPLETE, but it also requires that the container exits with a zero status. This condition can't be set on an essential container.
HEALTHY - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup.
startTimeout (integer) --
Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a COMPLETE, SUCCESS, or HEALTHY status. If a startTimeout value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a STOPPED state.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks using the EC2 launch type, your container instances require at least version 1.26.0 of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
stopTimeout (integer) --
Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks that use the Fargate launch type, the max stop timeout value is 120 seconds and if the parameter is not specified, the default value of 30 seconds is used.
For tasks that use the EC2 launch type, if the stopTimeout parameter isn't specified, the value set for the Amazon ECS container agent configuration variable ECS_CONTAINER_STOP_TIMEOUT is used. If neither the stopTimeout parameter or the ECS_CONTAINER_STOP_TIMEOUT agent configuration variable are set, then the default values of 30 seconds for Linux containers and 30 seconds on Windows containers are used. Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
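A hedged sketch combining dependsOn, startTimeout, and stopTimeout: a non-essential one-shot container must exit successfully before the application container starts. The container names and images are placeholders.
# Hedged sketch of container dependencies with start and stop timeouts.
container_definitions = [
    {
        'name': 'db-migration',     # hypothetical one-shot container
        'image': 'example/migrator:latest',
        'essential': False,         # SUCCESS conditions can't target an essential container
    },
    {
        'name': 'app',
        'image': 'example/app:latest',
        'essential': True,
        'dependsOn': [{'containerName': 'db-migration', 'condition': 'SUCCESS'}],
        'startTimeout': 120,        # seconds to wait for the dependency before giving up
        'stopTimeout': 30,          # seconds before the container is forcefully killed on stop
    },
]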
versionConsistency (string) --
Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, the value is enabled. If you set the value for a container as disabled, Amazon ECS will not resolve the provided container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see Container image resolution in the Amazon ECS Developer Guide.
hostname (string) --
The hostname to use for your container. This parameter maps to Hostname in the docker container create command and the --hostname option to docker run.
user (string) --
The user to use inside the container. This parameter maps to User in the docker container create command and the --user option to docker run.
You can specify the user using the following formats. If specifying a UID or GID, you must specify it as a positive integer.
user
user:group
uid
uid:gid
user:gid
uid:group
workingDirectory (string) --
The working directory to run commands inside the container in. This parameter maps to WorkingDir in the docker container create command and the --workdir option to docker run.
disableNetworking (boolean) --
When this parameter is true, networking is off within the container. This parameter maps to NetworkDisabled in the docker container create command.
privileged (boolean) --
When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root user). This parameter maps to Privileged in the docker container create command and the --privileged option to docker run.
readonlyRootFilesystem (boolean) --
When this parameter is true, the container is given read-only access to its root file system. This parameter maps to ReadonlyRootfs in the docker container create command and the --read-only option to docker run.
dnsServers (list) --
A list of DNS servers that are presented to the container. This parameter maps to Dns in the docker container create command and the --dns option to docker run.
(string) --
dnsSearchDomains (list) --
A list of DNS search domains that are presented to the container. This parameter maps to DnsSearch in the docker container create command and the --dns-search option to docker run.
(string) --
extraHosts (list) --
A list of hostnames and IP address mappings to append to the /etc/hosts file on the container. This parameter maps to ExtraHosts in the docker container create command and the --add-host option to docker run.
(dict) --
Hostnames and IP address entries that are added to the /etc/hosts file of a container via the extraHosts parameter of its ContainerDefinition.
hostname (string) --
The hostname to use in the /etc/hosts entry.
ipAddress (string) --
The IP address to use in the /etc/hosts entry.
dockerSecurityOptions (list) --
A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks using the Fargate launch type.
For Linux tasks on EC2, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems.
For any tasks on EC2, this parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers in the Amazon Elastic Container Service Developer Guide.
This parameter maps to SecurityOpt in the docker container create command and the --security-opt option to docker run.
Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | "credentialspec:CredentialSpecFilePath"
(string) --
interactive (boolean) --
When this parameter is true, you can deploy containerized applications that require stdin or a tty to be allocated. This parameter maps to OpenStdin in the docker container create command and the --interactive option to docker run.
pseudoTerminal (boolean) --
When this parameter is true, a TTY is allocated. This parameter maps to Tty in the docker container create command and the --tty option to docker run.
dockerLabels (dict) --
A key/value map of labels to add to the container. This parameter maps to Labels in the docker container create command and the --label option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
ulimits (list) --
A list of ulimits to set in the container. If a ulimit value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to Ulimits in the docker container create command and the --ulimit option to docker run. Valid naming values are displayed in the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile resource limit parameter which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(dict) --
The ulimit settings to pass to the container.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile resource limit parameter which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
You can specify the ulimit settings for a container in a task definition.
name (string) --
The type of the ulimit.
softLimit (integer) --
The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
hardLimit (integer) --
The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
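For example, a container definition fragment that raises the nofile limit might look like the following sketch; the limit values are illustrative, not defaults taken from this documentation.

# Fragment of a containerDefinitions entry; values are illustrative.
ulimits_fragment = {
    'ulimits': [
        {
            'name': 'nofile',       # open-file limit; Fargate defaults both limits to 65535
            'softLimit': 65536,
            'hardLimit': 1048576,   # hypothetical hard limit for an EC2-hosted task
        },
    ],
}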
logConfiguration (dict) --
The log configuration specification for the container.
This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options).
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might cause the buffer inside Docker to run out of memory.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
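As a hedged sketch of the options described above, the following logConfiguration fragment uses the awslogs driver with non-blocking delivery and a buffer size; the log group name and Region are placeholders. A secretOptions entry would follow the same name/valueFrom shape as container secrets.

# Fragment of a container definition; log group, Region, and prefix are placeholders.
log_configuration = {
    'logDriver': 'awslogs',
    'options': {
        'awslogs-group': '/ecs/timeout-demo',
        'awslogs-region': 'us-east-1',
        'awslogs-stream-prefix': 'web',      # required when using Fargate
        'awslogs-create-group': 'true',      # defaults to false if omitted
        'mode': 'non-blocking',              # buffer logs instead of blocking stdout/stderr writes
        'max-buffer-size': '25m',            # intermediate buffer size used in non-blocking mode
    },
}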
healthCheck (dict) --
The container health check command and associated configuration parameters for the container. This parameter maps to HealthCheck in the docker container create command and the HEALTHCHECK parameter of docker run.
command (list) --
A string array representing the command that the container runs to determine if it is healthy. The string array must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container's default shell.
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets.
[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]
You don't include the double quotes and brackets when you use the Amazon Web Services Management Console.
CMD-SHELL, curl -f http://localhost/ || exit 1
An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the docker container create command.
(string) --
interval (integer) --
The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a command.
timeout (integer) --
The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a command.
retries (integer) --
The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a command.
startPeriod (integer) --
The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the startPeriod is off. This value applies only when you specify a command.
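A container-level health check using the values described above might look like the following fragment; the curl command mirrors the documentation's example and the numeric values are illustrative choices within the allowed ranges.

# Fragment of a containerDefinitions entry.
health_check = {
    'healthCheck': {
        'command': ['CMD-SHELL', 'curl -f http://localhost/ || exit 1'],
        'interval': 30,     # seconds between checks (5-300)
        'timeout': 5,       # seconds before a check is considered failed (2-60)
        'retries': 3,       # failures before the container is marked unhealthy (1-10)
        'startPeriod': 60,  # grace period in seconds before failures count (0-300)
    },
}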
systemControls (list) --
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure net.ipv4.tcp_keepalive_time setting to maintain longer lived connections.
(dict) --
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure net.ipv4.tcp_keepalive_time setting to maintain longer lived connections.
We don't recommend that you specify network-related systemControls parameters for multiple containers in a single task that also uses either the awsvpc or host network mode. Doing this has the following disadvantages:
For tasks that use the awsvpc network mode including Fargate, if you set systemControls for any container, it applies to all containers in the task. If you set different systemControls for multiple containers in a single task, the container that's started last determines which systemControls take effect.
For tasks that use the host network mode, the network namespace systemControls aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode.
For tasks that use the host IPC mode, IPC namespace systemControls aren't supported.
For tasks that use the task IPC mode, IPC namespace systemControls values apply to all containers within a task.
namespace (string) --
The namespaced kernel parameter to set a value for.
value (string) --
The namespaced kernel parameter to set a value for.
Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*"
Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted.
All of these values are supported by Fargate.
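For instance, the net.ipv4.tcp_keepalive_time parameter mentioned above could be set with a systemControls fragment like the following; the value of 300 seconds is a hypothetical choice.

# Fragment of a containerDefinitions entry; the value is illustrative.
system_controls = {
    'systemControls': [
        {'namespace': 'net.ipv4.tcp_keepalive_time', 'value': '300'},
    ],
}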
resourceRequirements (list) --
The type and amount of a resource to assign to a container. The only supported resource is a GPU.
(dict) --
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
value (string) --
The value for the specified resource type.
When the type is GPU, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the deviceName for an InferenceAccelerator specified in a task definition.
type (string) --
The type of resource to assign to a container.
firelensConfiguration (dict) --
The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the Amazon Elastic Container Service Developer Guide.
type (string) --
The log router to use. The valid values are fluentd or fluentbit.
options (dict) --
The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is "options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide.
(string) --
(string) --
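Mapped to a boto3 container definition, the syntax above could look like the following fragment; the S3 ARN is the placeholder from the documentation's own example.

# Fragment of a containerDefinitions entry for a FireLens log router container.
firelens_configuration = {
    'firelensConfiguration': {
        'type': 'fluentbit',
        'options': {
            'enable-ecs-log-metadata': 'true',
            'config-file-type': 's3',
            'config-file-value': 'arn:aws:s3:::mybucket/fluent.conf',
        },
    },
}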
credentialSpecs (list) --
A list of ARNs in SSM or Amazon S3 to a credential spec ( CredSpec) file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the dockerSecurityOptions. The maximum number of ARNs is 1.
There are two formats for each ARN.
credentialspecdomainless:MyARN
You use credentialspecdomainless:MyARN to provide a CredSpec with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret.
Each task that runs on any container instance can join different domains.
You can use this format without joining the container instance to a domain.
credentialspec:MyARN
You use credentialspec:MyARN to provide a CredSpec for a single domain.
You must join the container instance to the domain before you start any tasks that use this task definition.
In both formats, replace MyARN with the ARN in SSM or Amazon S3.
If you provide a credentialspecdomainless:MyARN, the credspec must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers.
(string) --
family (string) --
The name of a family that this task definition is registered to. Up to 255 characters are allowed. Letters (both uppercase and lowercase letters), numbers, hyphens (-), and underscores (_) are allowed.
A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add.
taskRoleArn (string) --
The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the task permission to call Amazon Web Services APIs on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
networkMode (string) --
The Docker networking mode to use for the containers in the task. The valid values are none, bridge, awsvpc, and host. If no network mode is specified, the default is bridge.
For Amazon ECS tasks on Fargate, the awsvpc network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, <default> or awsvpc can be used. If the network mode is set to none, you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The host and awsvpc network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the bridge mode.
With the host and awsvpc network modes, exposed container ports are mapped directly to the corresponding host port (for the host network mode) or the attached elastic network interface port (for the awsvpc network mode), so you cannot take advantage of dynamic host port mappings.
If the network mode is awsvpc, the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
If the network mode is host, you cannot run multiple instantiations of the same task on a single container instance when port mappings are used.
revision (integer) --
The revision of the task in a particular family. The revision is a version number of a task definition in a family. When you register a task definition for the first time, the revision is 1. Each time that you register a new revision of a task definition in the same family, the revision value always increases by one. This is even if you deregistered previous revisions in this family.
volumes (list) --
The list of data volume definitions for the task. For more information, see Using data volumes in tasks in the Amazon Elastic Container Service Developer Guide.
(dict) --
The data volume configuration for tasks launched using this task definition. Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes but only one volume configured at launch is supported. Each volume defined in the volume configuration may only specify a name and one of either configuredAtLaunch, dockerVolumeConfiguration, efsVolumeConfiguration, fsxWindowsFileServerVolumeConfiguration, or host. If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks.
name (string) --
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task.
For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition.
When a volume is using the efsVolumeConfiguration, the name is required.
host (dict) --
This parameter is specified when you use bind mount host volumes. The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running.
Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my\path:C:\my\path or D:\:C:\my\path.
sourcePath (string) --
When the host parameter is used, specify a sourcePath to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
If you're using the Fargate launch type, the sourcePath parameter is not supported.
dockerVolumeConfiguration (dict) --
This parameter is specified when you use Docker volumes.
Windows containers only support the use of the local driver. To use bind mounts, specify the host parameter instead.
scope (string) --
The scope for the Docker volume that determines its lifecycle. Docker volumes that are scoped to a task are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as shared persist after the task stops.
autoprovision (boolean) --
If this value is true, the Docker volume is created if it doesn't already exist.
driver (string) --
The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use docker plugin ls to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. This parameter maps to Driver in the docker container create command and the --driver option to docker volume create.
driverOpts (dict) --
A map of Docker driver-specific options passed through. This parameter maps to DriverOpts in the docker volume create command and the --opt option to docker volume create.
(string) --
(string) --
labels (dict) --
Custom metadata to add to your Docker volume. This parameter maps to Labels in the docker container create command and the --label option to docker volume create.
(string) --
(string) --
efsVolumeConfiguration (dict) --
This parameter is specified when you use an Amazon Elastic File System file system for task storage.
fileSystemId (string) --
The Amazon EFS file system ID to use.
rootDirectory (string) --
The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying / will have the same effect as omitting this parameter.
transitEncryption (string) --
Determines whether to use encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be turned on if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of DISABLED is used. For more information, see Encrypting data in transit in the Amazon Elastic File System User Guide.
transitEncryptionPort (integer) --
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS mount helper in the Amazon Elastic File System User Guide.
authorizationConfig (dict) --
The authorization configuration details for the Amazon EFS file system.
accessPointId (string) --
The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the EFSVolumeConfiguration must either be omitted or set to / which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be on in the EFSVolumeConfiguration. For more information, see Working with Amazon EFS access points in the Amazon Elastic File System User Guide.
iam (string) --
Determines whether to use the Amazon ECS task role defined in a task definition when mounting the Amazon EFS file system. If it is turned on, transit encryption must be turned on in the EFSVolumeConfiguration. If this parameter is omitted, the default value of DISABLED is used. For more information, see Using Amazon EFS access points in the Amazon Elastic Container Service Developer Guide.
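Putting these fields together, a volumes entry for an Amazon EFS file system with an access point and IAM authorization might be sketched as follows; the file system and access point IDs are placeholders.

# Fragment of the volumes parameter for register_task_definition; IDs are placeholders.
efs_volume = {
    'name': 'shared-data',                       # referenced by sourceVolume in a container's mountPoints
    'efsVolumeConfiguration': {
        'fileSystemId': 'fs-0123456789abcdef0',
        'rootDirectory': '/',                    # must be '/' or omitted when an access point is used
        'transitEncryption': 'ENABLED',          # required for IAM authorization and access points
        'authorizationConfig': {
            'accessPointId': 'fsap-0123456789abcdef0',
            'iam': 'ENABLED',
        },
    },
}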
fsxWindowsFileServerVolumeConfiguration (dict) --
This parameter is specified when you use Amazon FSx for Windows File Server file system for task storage.
fileSystemId (string) --
The Amazon FSx for Windows File Server file system ID to use.
rootDirectory (string) --
The directory within the Amazon FSx for Windows File Server file system to mount as the root directory inside the host.
authorizationConfig (dict) --
The authorization configuration details for the Amazon FSx for Windows File Server file system.
credentialsParameter (string) --
The authorization credential option to use. The authorization credential options can be provided using either the Amazon Resource Name (ARN) of an Secrets Manager secret or SSM Parameter Store parameter. The ARN refers to the stored credentials.
domain (string) --
A fully qualified domain name hosted by a Directory Service Managed Microsoft AD (Active Directory) or self-hosted AD on Amazon EC2.
configuredAtLaunch (boolean) --
Indicates whether the volume should be configured at launch time. This is used to create Amazon EBS volumes for standalone tasks or tasks created as part of a service. Each task definition revision may only have one volume configured at launch in the volume configuration.
To configure a volume at launch time, use this task definition revision and specify a volumeConfigurations object when calling the CreateService, UpdateService, RunTask or StartTask APIs.
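As a hedged sketch of configuring such a volume at launch, the following run_task call supplies a volumeConfigurations entry whose name matches a configuredAtLaunch volume in the task definition; the cluster, subnet, task definition, and role ARN are placeholders.

import boto3

ecs = boto3.client('ecs')

ecs.run_task(
    cluster='my-cluster',
    taskDefinition='volume-demo:1',
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],
            'assignPublicIp': 'ENABLED',
        },
    },
    volumeConfigurations=[
        {
            'name': 'data',  # must match the configuredAtLaunch volume name in the task definition
            'managedEBSVolume': {
                'sizeInGiB': 50,
                'roleArn': 'arn:aws:iam::123456789012:role/ecsInfrastructureRole',  # placeholder
            },
        },
    ],
)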
status (string) --
The status of the task definition.
requiresAttributes (list) --
The container instance attributes required by your task. When an Amazon EC2 instance is registered to your cluster, the Amazon ECS container agent assigns some standard attributes to the instance. You can apply custom attributes. These are specified as key-value pairs using the Amazon ECS console or the PutAttributes API. These attributes are used when determining task placement for tasks hosted on Amazon EC2 instances. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
(dict) --
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the attribute. The name must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), backslashes (\), or periods (.).
value (string) --
The value of the attribute. The value must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), backslashes (\), colons (:), or spaces. The value can't start or end with a space.
targetType (string) --
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
targetId (string) --
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
placementConstraints (list) --
An array of placement constraint objects to use for tasks.
(dict) --
The constraint on task placement in the task definition. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. The MemberOf constraint restricts selection to be from a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
compatibilities (list) --
Amazon ECS validates the task definition parameters with those supported by the launch type. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
(string) --
runtimePlatform (dict) --
The operating system that your task definitions are running on. A platform family is specified only for tasks using the Fargate launch type.
When you specify a task in a service, this value must match the runtimePlatform value of the service.
cpuArchitecture (string) --
The CPU architecture.
You can run your Linux tasks on an ARM-based platform by setting the value to ARM64. This option is available for tasks that run on Linux Amazon EC2 instance or Linux containers on Fargate.
operatingSystemFamily (string) --
The operating system.
requiresCompatibilities (list) --
The task launch types the task definition was validated against. The valid values are EC2, FARGATE, and EXTERNAL. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
(string) --
cpu (string) --
The number of cpu units used by the task. If you use the EC2 launch type or the external launch type, this field is optional. Supported values are between 128 CPU units (0.125 vCPUs) and 196608 CPU units (192 vCPUs).
If you use the Fargate launch type, this field is required, and the value that you choose determines your range of valid values for the memory parameter. For information about the valid values, see Task size in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The amount (in MiB) of memory used by the task.
If your task runs on Amazon EC2 instances, you must specify either a task-level memory value or a container-level memory value. This field is optional and any value can be used. If a task-level memory value is specified, the container-level memory value is optional. For more information regarding container-level memory and memory reservation, see ContainerDefinition.
If your task runs on Fargate, this field is required. You must use one of the following values. The value you choose determines your range of valid values for the cpu parameter.
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu values: 256 (.25 vCPU)
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available cpu values: 512 (.5 vCPU)
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available cpu values: 1024 (1 vCPU)
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available cpu values: 2048 (2 vCPU)
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available cpu values: 4096 (4 vCPU)
Between 16384 (16 GB) and 61440 (60 GB) in increments of 4096 (4 GB) - Available cpu values: 8192 (8 vCPU) This option requires Linux platform 1.4.0 or later.
Between 32768 (32 GB) and 122880 (120 GB) in increments of 8192 (8 GB) - Available cpu values: 16384 (16 vCPU) This option requires Linux platform 1.4.0 or later.
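For example, one of the valid pairs listed above (1 vCPU with 3 GB) could be registered as in the following sketch; the family, container name, and image are placeholders.

import boto3

ecs = boto3.client('ecs')

ecs.register_task_definition(
    family='fargate-sizing-demo',
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='1024',     # 1 vCPU
    memory='3072',  # 3 GB, within the 2048-8192 MiB range allowed for 1024 CPU units
    containerDefinitions=[
        {
            'name': 'app',
            'image': 'public.ecr.aws/docker/library/busybox:latest',
            'essential': True,
        },
    ],
)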
inferenceAccelerators (list) --
The Elastic Inference accelerator that's associated with the task.
(dict) --
Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name. The deviceName must also be referenced in a container definition as a ResourceRequirement.
deviceType (string) --
The Elastic Inference accelerator type to use.
pidMode (string) --
The process namespace to use for the containers in the task. The valid values are host or task. On Fargate for Linux containers, the only valid value is task. For example, monitoring sidecars might need pidMode to access information about other containers running in the same task.
If host is specified, all containers within the tasks that specified the host PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.
If task is specified, all containers within the specified task share the same process namespace.
If no value is specified, the default is a private namespace for each container.
If the host PID mode is used, there's a heightened risk of undesired process namespace exposure.
ipcMode (string) --
The IPC resource namespace to use for the containers in the task. The valid values are host, task, or none. If host is specified, then all containers within the tasks that specified the host IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If task is specified, all containers within the specified task share the same IPC resources. If none is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance.
If the host IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure.
If you are setting namespaced kernel parameters using systemControls for the containers in the task, the following will apply to your IPC resource namespace. For more information, see System Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host IPC mode, IPC namespace related systemControls are not supported.
For tasks that use the task IPC mode, IPC namespace related systemControls will apply to all containers within a task.
proxyConfiguration (dict) --
The configuration details for the App Mesh proxy.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the ecs-init package to use a proxy configuration. If your container instances are launched from the Amazon ECS optimized AMI version 20190301 or later, they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
type (string) --
The proxy type. The only supported value is APPMESH.
containerName (string) --
The name of the container that will serve as the App Mesh proxy.
properties (list) --
The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs.
IgnoredUID - (Required) The user ID (UID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredGID is specified, this field can be empty.
IgnoredGID - (Required) The group ID (GID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredUID is specified, this field can be empty.
AppPorts - (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the ProxyIngressPort and ProxyEgressPort.
ProxyIngressPort - (Required) Specifies the port that incoming traffic to the AppPorts is directed to.
ProxyEgressPort - (Required) Specifies the port that outgoing traffic from the AppPorts is directed to.
EgressIgnoredPorts - (Required) The egress traffic going to the specified ports is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
EgressIgnoredIPs - (Required) The egress traffic going to the specified IP addresses is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
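A proxyConfiguration fragment covering the required properties might look like the following sketch; the container name, UID, and port values are the conventional App Mesh Envoy defaults, used here only as placeholders.

# Fragment of register_task_definition kwargs; values are placeholders.
proxy_configuration = {
    'proxyConfiguration': {
        'type': 'APPMESH',
        'containerName': 'envoy',
        'properties': [
            {'name': 'IgnoredUID', 'value': '1337'},
            {'name': 'AppPorts', 'value': '8080'},
            {'name': 'ProxyIngressPort', 'value': '15000'},
            {'name': 'ProxyEgressPort', 'value': '15001'},
            {'name': 'EgressIgnoredPorts', 'value': ''},   # may be an empty list per the doc
            {'name': 'EgressIgnoredIPs', 'value': '169.254.170.2,169.254.169.254'},
        ],
    },
}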
registeredAt (datetime) --
The Unix timestamp for the time when the task definition was registered.
deregisteredAt (datetime) --
The Unix timestamp for the time when the task definition was deregistered.
registeredBy (string) --
The principal that registered the task definition.
ephemeralStorage (dict) --
The ephemeral storage settings to use for tasks run with the task definition.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
enableFaultInjection (boolean) --
Enables fault injection and allows for fault injection requests to be accepted from the task's containers. The default value is false.
{'cluster': 'string'}Response
{'capacityProviders': {'cluster': 'string', 'managedInstancesProvider': {'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': {'ec2InstanceProfileArn': 'string', 'instanceRequirements': {'acceleratorCount': {'max': 'integer', 'min': 'integer'}, 'acceleratorManufacturers': ['amazon-web-services ' '| ' 'amd ' '| ' 'nvidia ' '| ' 'xilinx ' '| ' 'habana'], 'acceleratorNames': ['a100 ' '| ' 'inferentia ' '| ' 'k520 ' '| ' 'k80 ' '| ' 'm60 ' '| ' 'radeon-pro-v520 ' '| ' 't4 ' '| ' 'vu9p ' '| ' 'v100 ' '| ' 'a10g ' '| ' 'h100 ' '| ' 't4g'], 'acceleratorTotalMemoryMiB': {'max': 'integer', 'min': 'integer'}, 'acceleratorTypes': ['gpu ' '| ' 'fpga ' '| ' 'inference'], 'allowedInstanceTypes': ['string'], 'bareMetal': 'included ' '| ' 'required ' '| ' 'excluded', 'baselineEbsBandwidthMbps': {'max': 'integer', 'min': 'integer'}, 'burstablePerformance': 'included ' '| ' 'required ' '| ' 'excluded', 'cpuManufacturers': ['intel ' '| ' 'amd ' '| ' 'amazon-web-services'], 'excludedInstanceTypes': ['string'], 'instanceGenerations': ['current ' '| ' 'previous'], 'localStorage': 'included ' '| ' 'required ' '| ' 'excluded', 'localStorageTypes': ['hdd ' '| ' 'ssd'], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 'integer', 'memoryGiBPerVCpu': {'max': 'double', 'min': 'double'}, 'memoryMiB': {'max': 'integer', 'min': 'integer'}, 'networkBandwidthGbps': {'max': 'double', 'min': 'double'}, 'networkInterfaceCount': {'max': 'integer', 'min': 'integer'}, 'onDemandMaxPricePercentageOverLowestPrice': 'integer', 'requireHibernateSupport': 'boolean', 'spotMaxPricePercentageOverLowestPrice': 'integer', 'totalLocalStorageGB': {'max': 'double', 'min': 'double'}, 'vCpuCount': {'max': 'integer', 'min': 'integer'}}, 'monitoring': 'BASIC ' '| ' 'DETAILED', 'networkConfiguration': {'securityGroups': ['string'], 'subnets': ['string']}, 'storageConfiguration': {'storageSizeGiB': 'integer'}}, 'propagateTags': 'CAPACITY_PROVIDER ' '| NONE'}, 'status': {'PROVISIONING', 'DEPROVISIONING'}, 'type': 'EC2_AUTOSCALING | MANAGED_INSTANCES | FARGATE ' '| FARGATE_SPOT', 'updateStatus': {'CREATE_COMPLETE', 'CREATE_FAILED', 'CREATE_IN_PROGRESS'}}}
Describes one or more of your capacity providers.
See also: AWS API Documentation
Request Syntax
client.describe_capacity_providers(
    capacityProviders=[
        'string',
    ],
    cluster='string',
    include=[
        'TAGS',
    ],
    maxResults=123,
    nextToken='string'
)
list
The short name or full Amazon Resource Name (ARN) of one or more capacity providers. Up to 100 capacity providers can be described in an action.
(string) --
string
The name of the cluster to describe capacity providers for. When specified, only capacity providers associated with this cluster are returned, including Amazon ECS Managed Instances capacity providers.
list
Specifies whether or not you want to see the resource tags for the capacity provider. If TAGS is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
(string) --
integer
The maximum number of capacity provider results returned by DescribeCapacityProviders in paginated output. When this parameter is used, DescribeCapacityProviders only returns maxResults results in a single page along with a nextToken response element. The remaining results of the initial request can be seen by sending another DescribeCapacityProviders request with the returned nextToken value. This value can be between 1 and 10. If this parameter is not used, then DescribeCapacityProviders returns up to 10 results and a nextToken value if applicable.
string
The nextToken value returned from a previous paginated DescribeCapacityProviders request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value.
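A hedged usage sketch that pages through results with maxResults and nextToken for a hypothetical cluster:

import boto3

ecs = boto3.client('ecs')

# 'my-cluster' is a placeholder cluster name.
providers = []
kwargs = {'cluster': 'my-cluster', 'include': ['TAGS'], 'maxResults': 10}
while True:
    page = ecs.describe_capacity_providers(**kwargs)
    providers.extend(page.get('capacityProviders', []))
    token = page.get('nextToken')
    if not token:
        break
    kwargs['nextToken'] = token

for provider in providers:
    print(provider['name'], provider['status'], provider.get('type'))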
dict
Response Syntax
{ 'capacityProviders': [ { 'capacityProviderArn': 'string', 'name': 'string', 'cluster': 'string', 'status': 'PROVISIONING'|'ACTIVE'|'DEPROVISIONING'|'INACTIVE', 'autoScalingGroupProvider': { 'autoScalingGroupArn': 'string', 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' }, 'managedInstancesProvider': { 'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': { 'ec2InstanceProfileArn': 'string', 'networkConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ] }, 'storageConfiguration': { 'storageSizeGiB': 123 }, 'monitoring': 'BASIC'|'DETAILED', 'instanceRequirements': { 'vCpuCount': { 'min': 123, 'max': 123 }, 'memoryMiB': { 'min': 123, 'max': 123 }, 'cpuManufacturers': [ 'intel'|'amd'|'amazon-web-services', ], 'memoryGiBPerVCpu': { 'min': 123.0, 'max': 123.0 }, 'excludedInstanceTypes': [ 'string', ], 'instanceGenerations': [ 'current'|'previous', ], 'spotMaxPricePercentageOverLowestPrice': 123, 'onDemandMaxPricePercentageOverLowestPrice': 123, 'bareMetal': 'included'|'required'|'excluded', 'burstablePerformance': 'included'|'required'|'excluded', 'requireHibernateSupport': True|False, 'networkInterfaceCount': { 'min': 123, 'max': 123 }, 'localStorage': 'included'|'required'|'excluded', 'localStorageTypes': [ 'hdd'|'ssd', ], 'totalLocalStorageGB': { 'min': 123.0, 'max': 123.0 }, 'baselineEbsBandwidthMbps': { 'min': 123, 'max': 123 }, 'acceleratorTypes': [ 'gpu'|'fpga'|'inference', ], 'acceleratorCount': { 'min': 123, 'max': 123 }, 'acceleratorManufacturers': [ 'amazon-web-services'|'amd'|'nvidia'|'xilinx'|'habana', ], 'acceleratorNames': [ 'a100'|'inferentia'|'k520'|'k80'|'m60'|'radeon-pro-v520'|'t4'|'vu9p'|'v100'|'a10g'|'h100'|'t4g', ], 'acceleratorTotalMemoryMiB': { 'min': 123, 'max': 123 }, 'networkBandwidthGbps': { 'min': 123.0, 'max': 123.0 }, 'allowedInstanceTypes': [ 'string', ], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 123 } }, 'propagateTags': 'CAPACITY_PROVIDER'|'NONE' }, 'updateStatus': 'CREATE_IN_PROGRESS'|'CREATE_COMPLETE'|'CREATE_FAILED'|'DELETE_IN_PROGRESS'|'DELETE_COMPLETE'|'DELETE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED', 'updateStatusReason': 'string', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'type': 'EC2_AUTOSCALING'|'MANAGED_INSTANCES'|'FARGATE'|'FARGATE_SPOT' }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ], 'nextToken': 'string' }
Response Structure
(dict) --
capacityProviders (list) --
The list of capacity providers.
(dict) --
The details for a capacity provider.
capacityProviderArn (string) --
The Amazon Resource Name (ARN) that identifies the capacity provider.
name (string) --
The name of the capacity provider.
cluster (string) --
The cluster that this capacity provider is associated with. Managed instances capacity providers are cluster-scoped, meaning they can only be used within their associated cluster.
status (string) --
The current status of the capacity provider. Only capacity providers in an ACTIVE state can be used in a cluster. When a capacity provider is successfully deleted, it has an INACTIVE status.
autoScalingGroupProvider (dict) --
The Auto Scaling group settings for the capacity provider.
autoScalingGroupArn (string) --
The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or the Auto Scaling group name.
managedScaling (dict) --
The managed scaling settings for the Auto Scaling group capacity provider.
status (string) --
Determines whether to use managed scaling for the capacity provider.
targetCapacity (integer) --
The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than 0 and less than or equal to 100. For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a targetCapacity of 90. The default value of 100 percent results in the Amazon EC2 instances in your Auto Scaling group being completely used.
minimumScalingStepSize (integer) --
The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.
When additional capacity is required, Amazon ECS will scale up the minimum scaling step size even if the actual demand is less than the minimum scaling step size.
maximumScalingStepSize (integer) --
The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. If this parameter is omitted, the default value of 10000 is used.
instanceWarmupPeriod (integer) --
The period of time, in seconds, after which a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of 300 seconds is used.
managedTerminationProtection (string) --
The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is off.
When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on as well. For more information, see Instance Protection in the Auto Scaling User Guide.
When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in.
managedDraining (string) --
The managed draining option for the Auto Scaling group capacity provider. When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider.
managedInstancesProvider (dict) --
The configuration for the Amazon ECS Managed Instances provider. This includes the infrastructure role, the launch template configuration, and tag propagation settings.
infrastructureRoleArn (string) --
The Amazon Resource Name (ARN) of the infrastructure role that Amazon ECS assumes to manage instances. This role must include permissions for Amazon EC2 instance lifecycle management, networking, and any additional Amazon Web Services services required for your workloads.
For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
instanceLaunchTemplate (dict) --
The launch template that defines how Amazon ECS launches Amazon ECS Managed Instances. This includes the instance profile for your tasks, network and storage configuration, and instance requirements that determine which Amazon EC2 instance types can be used.
For more information, see Store instance launch parameters in Amazon EC2 launch templates in the Amazon EC2 User Guide.
ec2InstanceProfileArn (string) --
The Amazon Resource Name (ARN) of the instance profile that Amazon ECS applies to Amazon ECS Managed Instances. This instance profile must include the necessary permissions for your tasks to access Amazon Web Services services and resources.
For more information, see Amazon ECS instance profile for Managed Instances in the Amazon ECS Developer Guide.
networkConfiguration (dict) --
The network configuration for Amazon ECS Managed Instances. This specifies the subnets and security groups that instances use for network connectivity.
subnets (list) --
The list of subnet IDs where Amazon ECS can launch Amazon ECS Managed Instances. Instances are distributed across the specified subnets for high availability. All subnets must be in the same VPC.
(string) --
securityGroups (list) --
The list of security group IDs to apply to Amazon ECS Managed Instances. These security groups control the network traffic allowed to and from the instances.
(string) --
storageConfiguration (dict) --
The storage configuration for Amazon ECS Managed Instances. This defines the root volume size and type for the instances.
storageSizeGiB (integer) --
The size of the task volume, in GiB.
monitoring (string) --
CloudWatch provides two categories of monitoring: basic monitoring and detailed monitoring. By default, your managed instance is configured for basic monitoring. You can optionally enable detailed monitoring to help you more quickly identify and act on operational issues. You can enable or turn off detailed monitoring at launch or when the managed instance is running or stopped. For more information, see Detailed monitoring for Amazon ECS Managed Instances in the Amazon ECS Developer Guide.
instanceRequirements (dict) --
The instance requirements. You can specify:
The instance types
Instance requirements such as vCPU count, memory, network performance, and accelerator specifications
Amazon ECS automatically selects the instances that match the specified criteria.
vCpuCount (dict) --
The minimum and maximum number of vCPUs for the instance types. Amazon ECS selects instance types that have vCPU counts within this range.
min (integer) --
The minimum number of vCPUs. Instance types with fewer vCPUs than this value are excluded from selection.
max (integer) --
The maximum number of vCPUs. Instance types with more vCPUs than this value are excluded from selection.
memoryMiB (dict) --
The minimum and maximum amount of memory in mebibytes (MiB) for the instance types. Amazon ECS selects instance types that have memory within this range.
min (integer) --
The minimum amount of memory in MiB. Instance types with less memory than this value are excluded from selection.
max (integer) --
The maximum amount of memory in MiB. Instance types with more memory than this value are excluded from selection.
cpuManufacturers (list) --
The CPU manufacturers to include or exclude. You can specify intel, amd, or amazon-web-services to control which CPU types are used for your workloads.
(string) --
memoryGiBPerVCpu (dict) --
The minimum and maximum amount of memory per vCPU in gibibytes (GiB). This helps ensure that instance types have the appropriate memory-to-CPU ratio for your workloads.
min (float) --
The minimum amount of memory per vCPU in GiB. Instance types with a lower memory-to-vCPU ratio are excluded from selection.
max (float) --
The maximum amount of memory per vCPU in GiB. Instance types with a higher memory-to-vCPU ratio are excluded from selection.
excludedInstanceTypes (list) --
The instance types to exclude from selection. Use this to prevent Amazon ECS from selecting specific instance types that may not be suitable for your workloads.
(string) --
instanceGenerations (list) --
The instance generations to include. You can specify current to use the latest generation instances, or previous to include previous generation instances for cost optimization.
(string) --
spotMaxPricePercentageOverLowestPrice (integer) --
The maximum price for Spot instances as a percentage over the lowest priced On-Demand instance. This helps control Spot instance costs while maintaining access to capacity.
onDemandMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for On-Demand Instances, as a percentage higher than an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon ECS selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold.
bareMetal (string) --
Indicates whether to include bare metal instance types. Set to included to allow bare metal instances, excluded to exclude them, or required to use only bare metal instances.
burstablePerformance (string) --
Indicates whether to include burstable performance instance types (T2, T3, T3a, T4g). Set to included to allow burstable instances, excluded to exclude them, or required to use only burstable instances.
requireHibernateSupport (boolean) --
Indicates whether the instance types must support hibernation. When set to true, only instance types that support hibernation are selected.
networkInterfaceCount (dict) --
The minimum and maximum number of network interfaces for the instance types. This is useful for workloads that require multiple network interfaces.
min (integer) --
The minimum number of network interfaces. Instance types that support fewer network interfaces are excluded from selection.
max (integer) --
The maximum number of network interfaces. Instance types that support more network interfaces are excluded from selection.
localStorage (string) --
Indicates whether to include instance types with local storage. Set to included to allow local storage, excluded to exclude it, or required to use only instances with local storage.
localStorageTypes (list) --
The local storage types to include. You can specify hdd for hard disk drives, ssd for solid state drives, or both.
(string) --
totalLocalStorageGB (dict) --
The minimum and maximum total local storage in gigabytes (GB) for instance types with local storage.
min (float) --
The minimum total local storage in GB. Instance types with less local storage are excluded from selection.
max (float) --
The maximum total local storage in GB. Instance types with more local storage are excluded from selection.
baselineEbsBandwidthMbps (dict) --
The minimum and maximum baseline Amazon EBS bandwidth in megabits per second (Mbps). This is important for workloads with high storage I/O requirements.
min (integer) --
The minimum baseline Amazon EBS bandwidth in Mbps. Instance types with lower Amazon EBS bandwidth are excluded from selection.
max (integer) --
The maximum baseline Amazon EBS bandwidth in Mbps. Instance types with higher Amazon EBS bandwidth are excluded from selection.
acceleratorTypes (list) --
The accelerator types to include. You can specify gpu for graphics processing units, fpga for field programmable gate arrays, or inference for machine learning inference accelerators.
(string) --
acceleratorCount (dict) --
The minimum and maximum number of accelerators for the instance types. This is used when you need instances with specific numbers of GPUs or other accelerators.
min (integer) --
The minimum number of accelerators. Instance types with fewer accelerators are excluded from selection.
max (integer) --
The maximum number of accelerators. Instance types with more accelerators are excluded from selection.
acceleratorManufacturers (list) --
The accelerator manufacturers to include. You can specify nvidia, amd, amazon-web-services, or xilinx depending on your accelerator requirements.
(string) --
acceleratorNames (list) --
The specific accelerator names to include. For example, you can specify a100, v100, k80, or other specific accelerator models.
(string) --
acceleratorTotalMemoryMiB (dict) --
The minimum and maximum total accelerator memory in mebibytes (MiB). This is important for GPU workloads that require specific amounts of video memory.
min (integer) --
The minimum total accelerator memory in MiB. Instance types with less accelerator memory are excluded from selection.
max (integer) --
The maximum total accelerator memory in MiB. Instance types with more accelerator memory are excluded from selection.
networkBandwidthGbps (dict) --
The minimum and maximum network bandwidth in gigabits per second (Gbps). This is crucial for network-intensive workloads that require high throughput.
min (float) --
The minimum network bandwidth in Gbps. Instance types with lower network bandwidth are excluded from selection.
max (float) --
The maximum network bandwidth in Gbps. Instance types with higher network bandwidth are excluded from selection.
allowedInstanceTypes (list) --
The instance types to include in the selection. When specified, Amazon ECS only considers these instance types, subject to the other requirements specified.
(string) --
maxSpotPriceAsPercentageOfOptimalOnDemandPrice (integer) --
The maximum price for Spot instances as a percentage of the optimal On-Demand price. This provides more precise cost control for Spot instance selection.
propagateTags (string) --
Determines whether tags from the capacity provider are automatically applied to Amazon ECS Managed Instances. This helps with cost allocation and resource management by ensuring consistent tagging across your infrastructure.
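For reference, the following is a minimal sketch of how the Managed Instances provider fields described above nest together as a Python dictionary. The role ARNs, subnet ID, and security group ID are placeholders, and the value ranges are illustrative only.
managed_instances_provider = {
    "infrastructureRoleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",  # placeholder
    "instanceLaunchTemplate": {
        "ec2InstanceProfileArn": "arn:aws:iam::111122223333:instance-profile/ecsInstanceProfile",  # placeholder
        "networkConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
        },
        "storageConfiguration": {"storageSizeGiB": 100},
        "monitoring": "DETAILED",
        "instanceRequirements": {
            "vCpuCount": {"min": 2, "max": 8},
            "memoryMiB": {"min": 4096, "max": 32768},
            "cpuManufacturers": ["intel", "amd"],
            "instanceGenerations": ["current"],
        },
    },
    "propagateTags": "CAPACITY_PROVIDER",
}
# A dictionary of this shape is what DescribeCapacityProviders returns under
# capacityProvider.managedInstancesProvider for a Managed Instances provider.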
updateStatus (string) --
The update status of the capacity provider. The following are the possible states that are returned.
DELETE_IN_PROGRESS
The capacity provider is in the process of being deleted.
DELETE_COMPLETE
The capacity provider was successfully deleted and has an INACTIVE status.
DELETE_FAILED
The capacity provider can't be deleted. The update status reason provides further details about why the delete failed.
updateStatusReason (string) --
The update status reason. This provides further details about the update status for the capacity provider.
tags (list) --
The metadata that you apply to the capacity provider to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as this prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as this prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
type (string) --
The type of capacity provider. For Amazon ECS Managed Instances, this value is MANAGED_INSTANCES, indicating that Amazon ECS manages the underlying Amazon EC2 instances on your behalf.
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide.
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
nextToken (string) --
The nextToken value to include in a future DescribeCapacityProviders request. When the results of a DescribeCapacityProviders request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.
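As a usage sketch, the following pages through DescribeCapacityProviders results with nextToken and prints basic details for any Managed Instances providers. It assumes default credentials and Region; the maxResults value is arbitrary.
import boto3

ecs = boto3.client("ecs")

next_token = None
while True:
    kwargs = {"include": ["TAGS"], "maxResults": 10}
    if next_token:
        kwargs["nextToken"] = next_token
    page = ecs.describe_capacity_providers(**kwargs)
    for provider in page.get("capacityProviders", []):
        managed = provider.get("managedInstancesProvider")
        if managed:
            subnets = managed["instanceLaunchTemplate"]["networkConfiguration"]["subnets"]
            print(provider["name"], provider.get("status"), subnets)
    next_token = page.get("nextToken")
    if not next_token:
        break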
{'serviceRevisions': {'launchType': {'MANAGED_INSTANCES'}}}
Describes one or more service revisions.
A service revision is a version of the service that includes the values for the Amazon ECS resources (for example, task definition) and the environment resources (for example, load balancers, subnets, and security groups). For more information, see Amazon ECS service revisions.
You can't describe a service revision that was created before October 25, 2024.
See also: AWS API Documentation
Request Syntax
client.describe_service_revisions( serviceRevisionArns=[ 'string', ] )
list
[REQUIRED]
The ARN of the service revision.
You can specify a maximum of 20 ARNs.
You can call ListServiceDeployments to get the ARNs.
(string) --
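A minimal usage sketch follows; the service revision ARN is a placeholder, and in practice you would obtain revision ARNs from ListServiceDeployments as noted above.
import boto3

ecs = boto3.client("ecs")

revision_arns = [
    # Placeholder ARN; substitute values returned by ListServiceDeployments.
    "arn:aws:ecs:us-east-1:111122223333:service-revision/my-cluster/my-service/1234567890123456789",
]
resp = ecs.describe_service_revisions(serviceRevisionArns=revision_arns)
for revision in resp["serviceRevisions"]:
    print(revision["serviceRevisionArn"], revision.get("launchType"), revision.get("taskDefinition"))
for failure in resp.get("failures", []):
    print("failed:", failure["arn"], failure["reason"])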
dict
Response Syntax
{ 'serviceRevisions': [ { 'serviceRevisionArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'taskDefinition': 'string', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'platformVersion': 'string', 'platformFamily': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'containerImages': [ { 'containerName': 'string', 'imageDigest': 'string', 'image': 'string' }, ], 'guardDutyEnabled': True|False, 'serviceConnectConfiguration': { 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'volumeConfigurations': [ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'createdAt': datetime(2015, 1, 1), 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ], 'resolvedConfiguration': { 'loadBalancers': [ { 'targetGroupArn': 'string', 'productionListenerRule': 'string' }, ] } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] }Response Structure
(dict) --
serviceRevisions (list) --
The list of service revisions described.
(dict) --
Information about the service revision.
A service revision contains a record of the workload configuration Amazon ECS is attempting to deploy. Whenever you create or deploy a service, Amazon ECS automatically creates and captures the configuration that you're trying to deploy in the service revision. For information about service revisions, see Amazon ECS service revisions in the Amazon Elastic Container Service Developer Guide.
serviceRevisionArn (string) --
The ARN of the service revision.
serviceArn (string) --
The ARN of the service for the service revision.
clusterArn (string) --
The ARN of the cluster that hosts the service.
taskDefinition (string) --
The task definition the service revision uses.
capacityProviderStrategy (list) --
The capacity provider strategy the service revision uses.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) or CreateCluster APIs, or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
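To illustrate the base and weight behavior described above, the following sketch passes a two-provider strategy to CreateService. The provider, cluster, service, and task definition names are placeholders: at least 2 tasks go to capacityProviderA first, and remaining tasks are split 1:4 between A and B.
import boto3

ecs = boto3.client("ecs")

strategy = [
    {"capacityProvider": "capacityProviderA", "base": 2, "weight": 1},
    {"capacityProvider": "capacityProviderB", "base": 0, "weight": 4},
]
ecs.create_service(
    cluster="my-cluster",
    serviceName="my-service",
    taskDefinition="my-task-def:1",
    desiredCount=10,
    capacityProviderStrategy=strategy,
)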
launchType (string) --
The launch type the service revision uses.
platformVersion (string) --
For the Fargate launch type, the platform version the service revision uses.
platformFamily (string) --
The platform family the service revision uses.
loadBalancers (list) --
The load balancers the service revision uses.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
serviceRegistries (list) --
The service registries (for Service Discovery) the service revision uses.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
networkConfiguration (dict) --
The network configuration for a task or service.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
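For reference, an awsvpc network configuration of the shape described above looks like the following; the subnet and security group IDs are placeholders.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "DISABLED",
    }
}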
containerImages (list) --
The container images the service revision uses.
(dict) --
The details about the container image a service revision uses.
To ensure that all tasks in a service use the same container image, Amazon ECS resolves container image names and any image tags specified in the task definition to container image digests.
After the container image digest has been established, Amazon ECS uses the digest to start any other desired tasks, and for any future service and service revision updates. This leads to all tasks in a service always running identical container images, resulting in version consistency for your software. For more information, see Container image resolution in the Amazon ECS Developer Guide.
containerName (string) --
The name of the container.
imageDigest (string) --
The container image digest.
image (string) --
The container image.
guardDutyEnabled (boolean) --
Indicates whether Runtime Monitoring is turned on.
serviceConnectConfiguration (dict) --
The Service Connect configuration of your Amazon ECS service. The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) --
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
testTrafficRules (dict) --
The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic.
header (dict) --
The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers.
name (string) --
The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like X-Test-Version or X-Canary-Request that can be used to identify test traffic.
value (dict) --
The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions.
exact (string) --
The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments.
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP/ HTTP2/ GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) --
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, you could run Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help to resolve potential log loss issues, because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
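The following sketch shows a Service Connect configuration of the shape documented above, including an awslogs log configuration for the Service Connect proxy. The namespace, port name, log group, and Region values are placeholders, and a dictionary of this shape would typically be supplied when creating or updating a service.
service_connect_configuration = {
    "enabled": True,
    "namespace": "internal",            # placeholder Cloud Map namespace
    "services": [
        {
            "portName": "api",          # must match a portMapping name in the task definition
            "discoveryName": "api",
            "clientAliases": [{"port": 80, "dnsName": "api"}],
        }
    ],
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/service-connect",   # placeholder log group
            "awslogs-region": "us-east-1",             # placeholder Region
            "awslogs-stream-prefix": "my-service",
            "awslogs-create-group": "true",
            "mode": "non-blocking",
            "max-buffer-size": "25m",
        },
    },
}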
volumeConfigurations (list) --
The volumes that are configured at deployment that the service revision uses.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) --
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as false, the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either snapshotId or sizeInGiB in your volume configuration. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeInitializationRate (integer) --
The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a snapshotId. For more information, see Initialize Amazon EBS volumes in the Amazon EBS User Guide.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) --
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as this prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) --
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
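As an illustration of the managed Amazon EBS volume settings described above, the following volume configuration requests an encrypted 100 GiB gp3 volume formatted as xfs. The volume name must match a volume in the task definition, and the role ARN is a placeholder for an Amazon ECS infrastructure role.
volume_configurations = [
    {
        "name": "data",  # must match a volume name in the task definition
        "managedEBSVolume": {
            "encrypted": True,
            "volumeType": "gp3",
            "sizeInGiB": 100,
            "iops": 3000,
            "throughput": 125,
            "filesystemType": "xfs",
            "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",  # placeholder
            "tagSpecifications": [
                {
                    "resourceType": "volume",
                    "tags": [{"key": "team", "value": "platform"}],
                    "propagateTags": "SERVICE",
                }
            ],
        },
    }
]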
fargateEphemeralStorage (dict) --
The amount of ephemeral storage to allocate for the deployment.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
createdAt (datetime) --
The time that the service revision was created. The format is yyyy-mm-dd HH:mm:ss.SSSSS.
vpcLatticeConfigurations (list) --
The VPC Lattice configuration for the service revision.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to.
roleArn (string) --
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) --
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
resolvedConfiguration (dict) --
The resolved configuration for the service revision which contains the actual resources your service revision uses, such as which target groups serve traffic.
loadBalancers (list) --
The resolved load balancer configuration for the service revision. This includes information about which target groups serve traffic and which listener rules direct traffic to them.
(dict) --
The resolved load balancer configuration for a service revision. This includes information about which target groups serve traffic and which listener rules direct traffic to them.
targetGroupArn (string) --
The Amazon Resource Name (ARN) of the target group associated with the service revision.
productionListenerRule (string) --
The Amazon Resource Name (ARN) of the production listener rule or listener that directs traffic to the target group associated with the service revision.
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide.
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
{'services': {'deployments': {'launchType': {'MANAGED_INSTANCES'}}, 'launchType': {'MANAGED_INSTANCES'}, 'taskSets': {'launchType': {'MANAGED_INSTANCES'}}}}
Describes the specified services running in your cluster.
See also: AWS API Documentation
Request Syntax
client.describe_services( cluster='string', services=[ 'string', ], include=[ 'TAGS', ] )
string
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to describe. If you do not specify a cluster, the default cluster is assumed. This parameter is required if the service or services you are describing were launched in any cluster other than the default cluster.
list
[REQUIRED]
A list of services to describe. You may specify up to 10 services to describe in a single operation.
(string) --
list
Determines whether you want to see the resource tags for the service. If TAGS is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
(string) --
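A minimal usage sketch follows; the cluster and service names are placeholders.
import boto3

ecs = boto3.client("ecs")

resp = ecs.describe_services(
    cluster="my-cluster",
    services=["my-service", "my-other-service"],
    include=["TAGS"],
)
for service in resp["services"]:
    print(service["serviceName"], service["status"], service.get("launchType"))
for failure in resp.get("failures", []):
    print("failed:", failure["arn"], failure["reason"])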
dict
Response Syntax
{ 'services': [ { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ], 'hookDetails': {...}|[...]|123|123.4|'string'|True|None }, ] }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 
'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False, 'availabilityZoneRebalancing': 'ENABLED'|'DISABLED' }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] }Response Structure
(dict) --
services (list) --
The list of services described.
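As a quick illustration, the structure above is what the boto3 describe_services call returns; the sketch below (with placeholder cluster and service names) shows how the services and failures lists can be read:

import boto3

# Placeholder cluster and service names, for illustration only.
ecs = boto3.client("ecs")
response = ecs.describe_services(
    cluster="my-cluster",
    services=["my-service"],
)

for service in response["services"]:
    print(service["serviceName"], service["status"], service["runningCount"])

# Any service that couldn't be described is reported in the failures list.
for failure in response.get("failures", []):
    print("Could not describe", failure["arn"], "-", failure["reason"])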
(dict) --
Details on a service within a cluster.
serviceArn (string) --
The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide.
serviceName (string) --
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers (list) --
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
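For reference, a loadBalancers entry that carries the advancedConfiguration block for a blue/green deployment might be shaped like the sketch below; every ARN and name is a placeholder, not a value from a real account:

# Illustrative structure only; all ARNs and names are placeholders.
load_balancers = [
    {
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/blue/0123456789abcdef",
        "containerName": "web",
        "containerPort": 80,
        "advancedConfiguration": {
            # Target group that receives the green (new) service revision.
            "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green/0123456789abcdef",
            # Listener rules used for shifting production and test traffic.
            "productionListenerRule": "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/my-alb/0123/4567/prod",
            "testListenerRule": "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/my-alb/0123/4567/test",
            # Role that lets Amazon ECS call Elastic Load Balancing APIs for you.
            "roleArn": "arn:aws:iam::111122223333:role/ecsElbRole",
        },
    }
]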
serviceRegistries (list) --
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
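As a sketch of the rule above, a serviceRegistries entry for a bridge-mode task that registers a type SRV record could look like the following; the registry ARN and container name are placeholders:

# Placeholder values for illustration only.
service_registries = [
    {
        "registryArn": "arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef",
        # With bridge or host network mode and an SRV record, a containerName
        # and containerPort combination from the task definition is required.
        "containerName": "web",
        "containerPort": 8080,
    }
]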
status (string) --
The status of the service. The valid values are ACTIVE, DRAINING, or INACTIVE.
desiredCount (integer) --
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService , and it can be modified with UpdateService.
runningCount (integer) --
The number of tasks in the cluster that are in the RUNNING state.
pendingCount (integer) --
The number of tasks in the cluster that are in the PENDING state.
launchType (string) --
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy (list) --
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
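The base-then-weight behavior described above can be sketched as a small calculation. This is only an approximation of the documented semantics (rounding is simplified), not an ECS API call, and the provider names are hypothetical:

def distribute_tasks(total_tasks, strategy):
    """Satisfy each provider's base first, then split the remainder by weight ratio."""
    placed = {item["capacityProvider"]: item.get("base", 0) for item in strategy}
    remaining = total_tasks - sum(placed.values())
    total_weight = sum(item.get("weight", 0) for item in strategy) or 1
    for item in strategy:
        placed[item["capacityProvider"]] += round(remaining * item.get("weight", 0) / total_weight)
    return placed

strategy = [
    {"capacityProvider": "capacityProviderA", "weight": 1, "base": 2},  # hypothetical names
    {"capacityProvider": "capacityProviderB", "weight": 4, "base": 0},
]
# With 12 tasks: A keeps its base of 2 plus 1/5 of the remaining 10; B gets the other 4/5.
print(distribute_tasks(12, strategy))  # {'capacityProviderA': 4, 'capacityProviderB': 8}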
platformVersion (string) --
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
taskDefinition (string) --
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
deploymentConfiguration (dict) --
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide
enable (boolean) --
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the service uses either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
If any tasks are unhealthy and if maximumPercent doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one — using the minimumHealthyPercent as a constraint — to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services .
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
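As a minimal sketch, the rolling-update settings above are normally supplied through create_service or update_service; the cluster and service names below are placeholders, and the values shown simply echo the documented defaults:

import boto3

ecs = boto3.client("ecs")

# Placeholder cluster and service names.
ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    deploymentConfiguration={
        "deploymentCircuitBreaker": {"enable": True, "rollback": True},
        "maximumPercent": 200,         # allow an extra batch of tasks during the deployment
        "minimumHealthyPercent": 100,  # keep the full desired count running
    },
)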
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) --
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) --
Determines whether to use the CloudWatch alarm option in the service deployment process.
strategy (string) --
The deployment strategy for the service. Choose from these valid values:
ROLLING - When you create a service which uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration.
BLUE_GREEN - A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed.
bakeTimeInMinutes (integer) --
The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted.
You must provide this parameter when you use the BLUE_GREEN deployment strategy.
lifecycleHooks (list) --
An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle.
(dict) --
A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets.
For more information, see Lifecycle hooks for Amazon ECS service deployments in the Amazon Elastic Container Service Developer Guide.
hookTargetArn (string) --
The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported.
You must provide this parameter when configuring a deployment lifecycle hook.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf.
For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the Amazon Elastic Container Service Developer Guide.
lifecycleStages (list) --
The lifecycle stages at which to run the hook. Choose from these valid values:
RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage.
PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage.
POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage.
PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. You can use a lifecycle hook for this stage.
POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. You can use a lifecycle hook for this stage.
You must provide this parameter when configuring a deployment lifecycle hook.
(string) --
hookDetails (document) --
The details of the deployment lifecycle hook. This provides additional configuration for how the hook should be executed during deployment operations on Amazon ECS Managed Instances.
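Putting the blue/green fields together, a deploymentConfiguration that selects the BLUE_GREEN strategy with one lifecycle hook might be shaped as in the sketch below. This is an assumption-laden illustration: the Lambda function and role ARNs are placeholders, and whether these fields are accepted by create_service/update_service should be confirmed against the current API reference.

# Illustrative structure only; the ARNs are placeholders.
deployment_configuration = {
    "strategy": "BLUE_GREEN",
    "bakeTimeInMinutes": 15,  # keep blue and green running together after the production shift
    "lifecycleHooks": [
        {
            "hookTargetArn": "arn:aws:lambda:us-east-1:111122223333:function:pre-shift-check",
            "roleArn": "arn:aws:iam::111122223333:role/ecsHookInvokeRole",
            # Run the hook while production traffic is shifting to the green revision.
            "lifecycleStages": ["PRODUCTION_TRAFFIC_SHIFT"],
        }
    ],
}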
taskSets (list) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
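For example, the rounding described above works out as follows (a one-line illustration of the documented calculation, using made-up numbers):

import math

desired_count = 4     # the service's desiredCount
scale_percent = 30.0  # the task set's scale value, in PERCENT
computed_desired_count = math.ceil(desired_count * scale_percent / 100)
print(computed_desired_count)  # 4 * 30% = 1.2, which rounds up to 2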
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that's associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
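A minimal awsvpcConfiguration, with placeholder subnet and security group IDs, might be shaped like this sketch:

# Placeholder IDs; at most 16 subnets and 5 security groups can be listed.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "DISABLED",  # the default for create-service and update-service
    }
}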
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
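A hedged polling sketch that waits for every task set reported by describe_services to reach STEADY_STATE (the cluster and service names are placeholders):

import time
import boto3

ecs = boto3.client("ecs")

def wait_for_steady_state(cluster="my-cluster", service="my-service", delay=15):
    # Poll until every task set of the service reports STEADY_STATE.
    while True:
        described = ecs.describe_services(cluster=cluster, services=[service])["services"][0]
        statuses = [ts["stabilityStatus"] for ts in described.get("taskSets", [])]
        if statuses and all(status == "STEADY_STATE" for status in statuses):
            return
        time.sleep(delay)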
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
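The tag restrictions above can be checked client-side before calling the API; the helper below is only an illustrative sketch, not part of the ECS API:

def validate_tags(tags):
    # Enforce the documented limits: 50 tags per resource, key <= 128 characters,
    # value <= 256 characters, and no aws: prefix in any case combination.
    if len(tags) > 50:
        raise ValueError("A resource can have at most 50 tags.")
    for tag in tags:
        key, value = tag["key"], tag.get("value", "")
        if len(key) > 128 or len(value) > 256:
            raise ValueError(f"Tag '{key}' exceeds the key or value length limit.")
        if key.lower().startswith("aws:") or value.lower().startswith("aws:"):
            raise ValueError("The aws: prefix is reserved for Amazon Web Services use.")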
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment.
deployments (list) --
The current state of deployments for the service.
(dict) --
The details of an Amazon ECS service deployment. This is used only when a service uses the ECS deployment controller type.
id (string) --
The ID of the deployment.
status (string) --
The status of the deployment. The following describes each state.
PRIMARY
The most recent deployment of a service.
ACTIVE
A service deployment that still has running tasks, but is in the process of being replaced with a new PRIMARY deployment.
INACTIVE
A deployment that has been completely replaced.
taskDefinition (string) --
The most recent task definition that was specified for the tasks in the service to use.
desiredCount (integer) --
The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount (integer) --
The number of tasks in the deployment that are in the PENDING status.
runningCount (integer) --
The number of tasks in the deployment that are in the RUNNING status.
failedTasks (integer) --
The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a RUNNING state, or if it fails any of its defined health checks and is stopped.
createdAt (datetime) --
The Unix timestamp for the time when the service deployment was created.
updatedAt (datetime) --
The Unix timestamp for the time when the service deployment was last updated.
capacityProviderStrategy (list) --
The capacity provider strategy that the deployment is using.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
launchType (string) --
The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
platformVersion (string) --
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service are running on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
rolloutState (string) --
The rollout state of the deployment. When a service deployment is started, it begins in an IN_PROGRESS state. When the service reaches a steady state, the deployment transitions to a COMPLETED state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a FAILED state. A deployment in FAILED state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker.
rolloutStateReason (string) --
A description of the rollout state of a deployment.
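The rollout state of the PRIMARY deployment can be read from the same describe_services response, for example (a sketch with placeholder names):

import boto3

ecs = boto3.client("ecs")

# Placeholder cluster and service names.
service = ecs.describe_services(cluster="my-cluster", services=["my-service"])["services"][0]
primary = next((d for d in service["deployments"] if d["status"] == "PRIMARY"), None)
if primary and primary.get("rolloutState") == "FAILED":
    # rolloutStateReason explains why the deployment was marked as failed.
    print("Deployment failed:", primary.get("rolloutStateReason"))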
serviceConnectConfiguration (dict) --
The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments.
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) --
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
testTrafficRules (dict) --
The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic.
header (dict) --
The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers.
name (string) --
The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like X-Test-Version or X-Canary-Request that can be used to identify test traffic.
value (dict) --
The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions.
exact (string) --
The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments.
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP/HTTP2/GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) --
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
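A minimal sketch of the tls object, assuming you already have a Private Certificate Authority, a KMS key, and an infrastructure role (all ARNs below are placeholders):
# Sketch: Service Connect TLS configuration with placeholder ARNs.
tls = {
    'issuerCertificateAuthority': {
        'awsPcaAuthorityArn': 'arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/EXAMPLE'
    },
    'kmsKey': 'arn:aws:kms:us-east-1:111122223333:key/EXAMPLE',
    'roleArn': 'arn:aws:iam::111122223333:role/ecsServiceConnectTlsRole'
}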
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs for them to appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might cause the buffer inside of Docker to run out of memory.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
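Putting the options above together, a logConfiguration for the awslogs driver might look like the following sketch (the log group, Region, prefix, and buffer size are placeholders you would adjust):
# Sketch: awslogs driver options with non-blocking delivery.
log_configuration = {
    'logDriver': 'awslogs',
    'options': {
        'awslogs-create-group': 'true',
        'awslogs-group': '/ecs/my-service',
        'awslogs-region': 'us-east-1',
        'awslogs-stream-prefix': 'my-service',
        'mode': 'non-blocking',
        'max-buffer-size': '25m'
    }
}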
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
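For illustration, secretOptions entries referencing a Secrets Manager secret and an SSM Parameter Store parameter might look like this sketch (the option names and ARNs are placeholders):
# Sketch: secrets passed to the log driver as options.
secret_options = [
    {
        'name': 'splunk-token',
        'valueFrom': 'arn:aws:secretsmanager:us-east-1:111122223333:secret:splunk-token-EXAMPLE'
    },
    {
        'name': 'splunk-url',
        'valueFrom': 'arn:aws:ssm:us-east-1:111122223333:parameter/my-service/splunk-url'
    }
]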
serviceConnectResources (list) --
The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name.
(dict) --
The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service.
A task can resolve the dnsName for each of the clientAliases of a service. However a task can't resolve the discovery names. If you want to connect to a service, refer to the ServiceConnectConfiguration of that service for the list of clientAliases that you can use.
discoveryName (string) --
The discovery name of this Service Connect resource.
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
discoveryArn (string) --
The Amazon Resource Name (ARN) for the service in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS.
volumeConfigurations (list) --
The details of the volume that was configuredAtLaunch. You can configure different settings like the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) --
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as false, the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either snapshotId or sizeInGiB in your volume configuration. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeInitializationRate (integer) --
The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a snapshotId. For more information, see Initialize Amazon EBS volumes in the Amazon EBS User Guide.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) --
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) --
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
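Combining the fields above, a volumeConfigurations entry for a service-managed gp3 volume might look like the following sketch (the volume name must match a configuredAtLaunch volume in the task definition; the role ARN is a placeholder):
# Sketch: one service-managed Amazon EBS volume created per task.
volume_configurations = [
    {
        'name': 'ebs-data',
        'managedEBSVolume': {
            'encrypted': True,
            'volumeType': 'gp3',
            'sizeInGiB': 100,      # 1-16,384 for gp3
            'iops': 3000,          # gp3 default
            'throughput': 125,     # MiB/s, maximum 1,000
            'filesystemType': 'xfs',
            'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole'
        }
    }
]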
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the deployment.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
vpcLatticeConfigurations (list) --
The VPC Lattice configuration for the service deployment.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to.
roleArn (string) --
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) --
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
roleArn (string) --
The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events (list) --
The event stream for your service. A maximum of 100 of the latest events are displayed.
(dict) --
The details for an event that's associated with a service.
id (string) --
The ID string for the event.
createdAt (datetime) --
The Unix timestamp for the time when the event was triggered.
message (string) --
The event message.
createdAt (datetime) --
The Unix timestamp for the time when the service was created.
placementConstraints (list) --
The placement constraints for the tasks in the service.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
placementStrategy (list) --
The placement strategy that determines how tasks for the service are placed.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
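For illustration, a placement strategy and constraint combination using the types described above might look like this sketch (the expression is an example of the cluster query language, not a requirement):
# Sketch: spread across Availability Zones, then binpack on memory,
# restricted to a family of instance types.
placement_strategy = [
    {'type': 'spread', 'field': 'attribute:ecs.availability-zone'},
    {'type': 'binpack', 'field': 'memory'}
]
placement_constraints = [
    {'type': 'memberOf', 'expression': 'attribute:ecs.instance-type =~ m5.*'}
]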
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
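A minimal sketch of the awsvpc network configuration described above, with placeholder subnet and security group IDs:
# Sketch: awsvpc configuration for tasks with their own elastic network interface.
network_configuration = {
    'awsvpcConfiguration': {
        'subnets': ['subnet-0123456789abcdef0'],      # up to 16 subnets
        'securityGroups': ['sg-0123456789abcdef0'],   # up to 5 security groups
        'assignPublicIp': 'DISABLED'
    }
}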
healthCheckGracePeriodSeconds (integer) --
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started.
schedulingStrategy (string) --
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available.
REPLICA-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON-The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints.
deploymentController (dict) --
The deployment controller type the service is using.
type (string) --
The deployment controller type to use.
The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are:
ECS When you create a service which uses the ECS deployment controller, you can choose between the following deployment strategies:
ROLLING: When you create a service which uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios:
Gradual service updates: You need to update your service incrementally without taking the entire service offline at once.
Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments).
Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one.
No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds.
Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners.
No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments).
Stateful applications: Your application maintains state that makes it difficult to run two parallel environments.
Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment.
Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios.
BLUE_GREEN: A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios:
Service validation: When you need to validate new service revisions before directing production traffic to them
Zero downtime: When your service requires zero-downtime deployments
Instant roll back: When you need the ability to quickly roll back if issues are detected
Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect
External Use a third-party deployment controller.
Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it.
tags (list) --
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
createdBy (string) --
The principal that created the service.
enableECSManagedTags (boolean) --
Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
propagateTags (string) --
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
enableExecuteCommand (boolean) --
Determines whether the execute command functionality is turned on for the service. If true, the execute command functionality is turned on for all containers in tasks as part of the service.
availabilityZoneRebalancing (string) --
Indicates whether to use Availability Zone rebalancing for the service.
For more information, see Balancing an Amazon ECS service across Availability Zones in the Amazon Elastic Container Service Developer Guide .
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide.
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
{'taskDefinition': {'compatibilities': {'MANAGED_INSTANCES'}, 'requiresCompatibilities': {'MANAGED_INSTANCES'}}}
Describes a task definition. You can specify a family and revision to find information about a specific task definition, or you can simply specify the family to find the latest ACTIVE revision in that family.
See also: AWS API Documentation
Request Syntax
client.describe_task_definition(
    taskDefinition='string',
    include=[
        'TAGS',
    ]
)
string
[REQUIRED]
The family for the latest ACTIVE revision, family and revision ( family:revision) for a specific revision in the family, or full Amazon Resource Name (ARN) of the task definition to describe.
list
Determines whether to see the resource tags for the task definition. If TAGS is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
(string) --
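As a usage sketch, the call below describes a specific revision and checks whether the task definition is compatible with Managed Instances (the family and revision are placeholders):
# Sketch: describe a task definition and inspect its compatibilities.
import boto3

ecs = boto3.client('ecs')
response = ecs.describe_task_definition(
    taskDefinition='my-family:3',
    include=['TAGS']
)
task_def = response['taskDefinition']
supports_managed_instances = 'MANAGED_INSTANCES' in task_def.get('compatibilities', [])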
dict
Response Syntax
{ 'taskDefinition': { 'taskDefinitionArn': 'string', 'containerDefinitions': [ { 'name': 'string', 'image': 'string', 'repositoryCredentials': { 'credentialsParameter': 'string' }, 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'links': [ 'string', ], 'portMappings': [ { 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'name': 'string', 'appProtocol': 'http'|'http2'|'grpc', 'containerPortRange': 'string' }, ], 'essential': True|False, 'restartPolicy': { 'enabled': True|False, 'ignoredExitCodes': [ 123, ], 'restartAttemptPeriod': 123 }, 'entryPoint': [ 'string', ], 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'mountPoints': [ { 'sourceVolume': 'string', 'containerPath': 'string', 'readOnly': True|False }, ], 'volumesFrom': [ { 'sourceContainer': 'string', 'readOnly': True|False }, ], 'linuxParameters': { 'capabilities': { 'add': [ 'string', ], 'drop': [ 'string', ] }, 'devices': [ { 'hostPath': 'string', 'containerPath': 'string', 'permissions': [ 'read'|'write'|'mknod', ] }, ], 'initProcessEnabled': True|False, 'sharedMemorySize': 123, 'tmpfs': [ { 'containerPath': 'string', 'size': 123, 'mountOptions': [ 'string', ] }, ], 'maxSwap': 123, 'swappiness': 123 }, 'secrets': [ { 'name': 'string', 'valueFrom': 'string' }, ], 'dependsOn': [ { 'containerName': 'string', 'condition': 'START'|'COMPLETE'|'SUCCESS'|'HEALTHY' }, ], 'startTimeout': 123, 'stopTimeout': 123, 'versionConsistency': 'enabled'|'disabled', 'hostname': 'string', 'user': 'string', 'workingDirectory': 'string', 'disableNetworking': True|False, 'privileged': True|False, 'readonlyRootFilesystem': True|False, 'dnsServers': [ 'string', ], 'dnsSearchDomains': [ 'string', ], 'extraHosts': [ { 'hostname': 'string', 'ipAddress': 'string' }, ], 'dockerSecurityOptions': [ 'string', ], 'interactive': True|False, 'pseudoTerminal': True|False, 'dockerLabels': { 'string': 'string' }, 'ulimits': [ { 'name': 'core'|'cpu'|'data'|'fsize'|'locks'|'memlock'|'msgqueue'|'nice'|'nofile'|'nproc'|'rss'|'rtprio'|'rttime'|'sigpending'|'stack', 'softLimit': 123, 'hardLimit': 123 }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] }, 'healthCheck': { 'command': [ 'string', ], 'interval': 123, 'timeout': 123, 'retries': 123, 'startPeriod': 123 }, 'systemControls': [ { 'namespace': 'string', 'value': 'string' }, ], 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ], 'firelensConfiguration': { 'type': 'fluentd'|'fluentbit', 'options': { 'string': 'string' } }, 'credentialSpecs': [ 'string', ] }, ], 'family': 'string', 'taskRoleArn': 'string', 'executionRoleArn': 'string', 'networkMode': 'bridge'|'host'|'awsvpc'|'none', 'revision': 123, 'volumes': [ { 'name': 'string', 'host': { 'sourcePath': 'string' }, 'dockerVolumeConfiguration': { 'scope': 'task'|'shared', 'autoprovision': True|False, 'driver': 'string', 'driverOpts': { 'string': 'string' }, 'labels': { 'string': 'string' } }, 'efsVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'transitEncryption': 'ENABLED'|'DISABLED', 'transitEncryptionPort': 123, 'authorizationConfig': { 'accessPointId': 'string', 'iam': 'ENABLED'|'DISABLED' } }, 'fsxWindowsFileServerVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 
'authorizationConfig': { 'credentialsParameter': 'string', 'domain': 'string' } }, 'configuredAtLaunch': True|False }, ], 'status': 'ACTIVE'|'INACTIVE'|'DELETE_IN_PROGRESS', 'requiresAttributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'placementConstraints': [ { 'type': 'memberOf', 'expression': 'string' }, ], 'compatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', ], 'runtimePlatform': { 'cpuArchitecture': 'X86_64'|'ARM64', 'operatingSystemFamily': 'WINDOWS_SERVER_2019_FULL'|'WINDOWS_SERVER_2019_CORE'|'WINDOWS_SERVER_2016_FULL'|'WINDOWS_SERVER_2004_CORE'|'WINDOWS_SERVER_2022_CORE'|'WINDOWS_SERVER_2022_FULL'|'WINDOWS_SERVER_2025_CORE'|'WINDOWS_SERVER_2025_FULL'|'WINDOWS_SERVER_20H2_CORE'|'LINUX' }, 'requiresCompatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', ], 'cpu': 'string', 'memory': 'string', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'pidMode': 'host'|'task', 'ipcMode': 'host'|'task'|'none', 'proxyConfiguration': { 'type': 'APPMESH', 'containerName': 'string', 'properties': [ { 'name': 'string', 'value': 'string' }, ] }, 'registeredAt': datetime(2015, 1, 1), 'deregisteredAt': datetime(2015, 1, 1), 'registeredBy': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 }, 'enableFaultInjection': True|False }, 'tags': [ { 'key': 'string', 'value': 'string' }, ] }
Response Structure
(dict) --
taskDefinition (dict) --
The full task definition description.
taskDefinitionArn (string) --
The full Amazon Resource Name (ARN) of the task definition.
containerDefinitions (list) --
A list of container definitions in JSON format that describe the different containers that make up your task. For more information about container definition parameters and defaults, see Amazon ECS Task Definitions in the Amazon Elastic Container Service Developer Guide.
(dict) --
Container definitions are used in task definitions to describe the different containers that are launched as part of a task.
name (string) --
The name of a container. If you're linking multiple containers together in a task definition, the name of one container can be entered in the links of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name in the docker container create command and the --name option to docker run.
image (string) --
The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either repository-url/image:tag or repository-url/image@digest. For images using tags (repository-url/image:tag), up to 255 characters total are allowed, including letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs (#). For images using digests (repository-url/image@digest), the 255 character limit applies only to the repository URL and image name (everything before the @ sign). The only supported hash function is sha256, and the hash value after sha256: must be exactly 64 characters (only letters A-F, a-f, and numbers 0-9 are allowed). This parameter maps to Image in the docker container create command and the IMAGE parameter of docker run.
When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks.
Images in Amazon ECR repositories can be specified by either using the full registry/repository:tag or registry/repository@digest. For example, 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>:latest or 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE.
Images in official repositories on Docker Hub use a single name (for example, ubuntu or mongo).
Images in other repositories on Docker Hub are qualified with an organization name (for example, amazon/amazon-ecs-agent).
Images in other online repositories are qualified further by a domain name (for example, quay.io/assemblyline/ubuntu).
repositoryCredentials (dict) --
The private repository authentication credentials to use.
credentialsParameter (string) --
The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
cpu (integer) --
The number of cpu units reserved for the container. This parameter maps to CpuShares in the docker container create command and the --cpu-shares option to docker run.
This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level cpu value.
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:
Agent versions less than or equal to 1.1.0: Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.
Agent versions greater than or equal to 1.2.0: Null, zero, and CPU values of 1 are passed to Docker as 2.
Agent versions greater than or equal to 1.84.0: CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.
On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as 0, which Windows interprets as 1% of one CPU.
memory (integer) --
The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task memory value, if one is specified. This parameter maps to Memory in the docker container create command and the --memory option to docker run.
If using the Fargate launch type, this parameter is optional.
If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. If you specify both a container-level memory and memoryReservation value, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
memoryReservation (integer) --
The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when it needs to, up to either the hard limit specified with the memory parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to MemoryReservation in the docker container create command and the --memory-reservation option to docker run.
If a task-level memory value is not specified, you must specify a non-zero integer for one or both of memory or memoryReservation in a container definition. If you specify both, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.
For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation of 128 MiB, and a memory hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
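The soft/hard split described above can be expressed in a container definition like the following sketch, using the 128 MiB reservation and 300 MiB hard limit from the example (the container name and image are placeholders):
# Sketch: memoryReservation (soft limit) below memory (hard limit).
container_definition = {
    'name': 'web',
    'image': 'public.ecr.aws/docker/library/nginx:latest',
    'memoryReservation': 128,   # soft limit, MiB
    'memory': 300,              # hard limit, MiB; must be greater than memoryReservation
    'essential': True
}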
links (list) --
The links parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is bridge. The name:internalName construct is analogous to name:alias in Docker links. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to Links in the docker container create command and the --link option to docker run.
(string) --
portMappings (list) --
The list of port mappings for the container. Port mappings allow containers to access ports on the host container instance to send or receive traffic.
For task definitions that use the awsvpc network mode, only specify the containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
Port mappings on Windows use the NetNAT gateway address rather than localhost. There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself.
This parameter maps to PortBindings in the docker container create command and the --publish option to docker run. If the network mode of a task definition is set to none, then you can't specify port mappings. If the network mode of a task definition is set to host, then host ports must either be undefined or they must match the container port in the port mapping.
(dict) --
Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition.
If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
Most fields of this parameter (containerPort, hostPort, protocol) map to PortBindings in the docker container create command and the --publish option to docker run. If the network mode of a task definition is set to host, host ports must either be undefined or match the container port in the port mapping.
After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the networkBindings section of DescribeTasks API responses.
containerPort (integer) --
The port number on the container that's bound to the user-specified or automatically assigned host port.
If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort.
If you use containers in a task with the bridge network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see hostPort. Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance.
hostPort (integer) --
The port number on the container instance to reserve for your container.
If you specify a containerPortRange, leave this field empty and the value of the hostPort is set as follows:
For containers in a task with the awsvpc network mode, the hostPort is set to the same value as the containerPort. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports. This is a dynamic mapping strategy.
If you use containers in a task with the awsvpc or host network mode, the hostPort can either be left blank or set to the same value as the containerPort.
If you use containers in a task with the bridge network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the hostPort (or set it to 0) while specifying a containerPort and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version.
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under /proc/sys/net/ipv4/ip_local_port_range. If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range.
The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the remainingResources of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota.
protocol (string) --
The protocol used for the port mapping. Valid values are tcp and udp. The default is tcp. protocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.
name (string) --
The name that's used for the port mapping. This parameter is the name that you use in the serviceConnectConfiguration and the vpcLatticeConfigurations of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
appProtocol (string) --
The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy. If you set this parameter, Amazon ECS adds protocol-specific telemetry in the Amazon ECS console and CloudWatch.
If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP.
appProtocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
containerPortRange (string) --
The port number range on the container that's bound to the dynamically mapped host port range.
The following rules apply when you specify a containerPortRange:
You must use either the bridge network mode or the awsvpc network mode.
This parameter is available for both the EC2 and Fargate launch types.
This parameter is available for both the Linux and Windows operating systems.
The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the ecs-init package.
You can specify a maximum of 100 port ranges per container.
You do not specify a hostPortRange. The value of the hostPortRange is set as follows:
For containers in a task with the awsvpc network mode, the hostPortRange is set to the same value as the containerPortRange. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes it to docker to bind them to the container ports.
The containerPortRange valid values are between 1 and 65535.
A port can only be included in one port mapping per container.
You cannot specify overlapping port ranges.
The first port in the range must be less than the last port in the range.
Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the Github website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the Amazon ECS Developer Guide.
You can call DescribeTasks to view the hostPortRange which are the host ports that are bound to the container ports.
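For illustration, the portMappings list below combines a named Service Connect port with a dynamically mapped container port range (names, ports, and the range are placeholders):
# Sketch: named port for Service Connect plus a container port range.
port_mappings = [
    {
        'name': 'api',                # referenced by serviceConnectConfiguration
        'containerPort': 8080,
        'protocol': 'tcp',
        'appProtocol': 'http'
    },
    {
        'containerPortRange': '4000-4004',   # hostPortRange is assigned automatically
        'protocol': 'tcp'
    }
]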
essential (boolean) --
If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential parameter of a container is marked as false, its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.
All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application Architecture in the Amazon Elastic Container Service Developer Guide.
restartPolicy (dict) --
The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether a restart policy is enabled for the container.
ignoredExitCodes (list) --
A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes.
(integer) --
restartAttemptPeriod (integer) --
A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every restartAttemptPeriod seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum restartAttemptPeriod of 60 seconds and a maximum restartAttemptPeriod of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted.
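A minimal sketch of a container restart policy using the three fields above (the values are illustrative):
# Sketch: restart the container unless it exits cleanly.
restart_policy = {
    'enabled': True,
    'ignoredExitCodes': [0],       # don't restart on a clean exit
    'restartAttemptPeriod': 300    # seconds the container must run before a restart is allowed
}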
entryPoint (list) --
The entry point that's passed to the container. This parameter maps to Entrypoint in the docker container create command and the --entrypoint option to docker run.
(string) --
command (list) --
The command that's passed to the container. This parameter maps to Cmd in the docker container create command and the COMMAND parameter to docker run. If there are multiple arguments, each argument is a separated string in the array.
(string) --
environment (list) --
The environment variables to pass to a container. This parameter maps to Env in the docker container create command and the --env option to docker run.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
environmentFiles (list) --
A list of files containing the environment variables to pass to a container. This parameter maps to the --env-file option to docker run.
You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file contains an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying Environment Variables in the Amazon Elastic Container Service Developer Guide.
(dict) --
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the VARIABLE values.
value (string) --
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type (string) --
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
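A brief sketch of the environment and environmentFiles parameters described above; the S3 object ARN and variable names are placeholders.

    # Variables set in "environment" take precedence over duplicates found in
    # the environment file; the file must have a .env extension and live in S3.
    environment = [
        {"name": "STAGE", "value": "production"},
    ]
    environment_files = [
        {
            "value": "arn:aws:s3:::my-config-bucket/app.env",  # placeholder object ARN
            "type": "s3",                                      # the only supported type
        }
    ]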
mountPoints (list) --
The mount points for data volumes in your container.
This parameter maps to Volumes in the docker container create command and the --volume option to docker run.
Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't span drives.
(dict) --
The details for a volume mount point that's used in a container definition.
sourceVolume (string) --
The name of the volume to mount. Must be a volume name referenced in the name parameter of a task definition volume.
containerPath (string) --
The path on the container to mount the host volume at.
readOnly (boolean) --
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
volumesFrom (list) --
Data volumes to mount from another container. This parameter maps to VolumesFrom in the docker container create command and the --volumes-from option to docker run.
(dict) --
Details on a data volume from another container in the same task definition.
sourceContainer (string) --
The name of another container within the same task definition to mount volumes from.
readOnly (boolean) --
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
linuxParameters (dict) --
Linux-specific modifications that are applied to the default Docker container configuration, such as Linux kernel capabilities. For more information see KernelCapabilities.
capabilities (dict) --
The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker.
add (list) --
The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string) --
drop (list) --
The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string) --
devices (list) --
Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run.
(dict) --
An object representing a container instance host device.
hostPath (string) --
The path for the device on the host container instance.
containerPath (string) --
The path inside the container at which to expose the host device.
permissions (list) --
The explicit permissions to provide to the container for the device. By default, the container has permissions for read, write, and mknod for the device.
(string) --
initProcessEnabled (boolean) --
Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
sharedMemorySize (integer) --
The value for the size (in MiB) of the /dev/shm volume. This parameter maps to the --shm-size option to docker run.
tmpfs (list) --
The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run.
(dict) --
The container path, mount options, and size of the tmpfs mount.
containerPath (string) --
The absolute file path where the tmpfs volume is to be mounted.
size (integer) --
The maximum size (in MiB) of the tmpfs volume.
mountOptions (list) --
The list of tmpfs volume mount options.
Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
(string) --
maxSwap (integer) --
The total amount of swap memory (in MiB) a container can use. This parameter will be translated to the --memory-swap option to docker run where the value would be the sum of the container memory plus the maxSwap value.
If a maxSwap value of 0 is specified, the container will not use swap. Accepted values are 0 or any positive integer. If the maxSwap parameter is omitted, the container will use the swap configuration for the container instance it is running on. A maxSwap value must be set for the swappiness parameter to be used.
swappiness (integer) --
This allows you to tune a container's memory swappiness behavior. A swappiness value of 0 will cause swapping to not happen unless absolutely necessary. A swappiness value of 100 will cause pages to be swapped very aggressively. Accepted values are whole numbers between 0 and 100. If the swappiness parameter is not specified, a default value of 60 is used. If a value is not specified for maxSwap then this parameter is ignored. This parameter maps to the --memory-swappiness option to docker run.
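The following sketch combines several of the linuxParameters fields above for a container on EC2 container instances; the specific capabilities, sizes, and mount options are illustrative assumptions.

    # Illustrative linuxParameters block.
    linux_parameters = {
        "capabilities": {
            "add": ["NET_ADMIN"],     # added to the default set provided by Docker
            "drop": ["NET_RAW"],      # removed from the default set
        },
        "initProcessEnabled": True,   # forwards signals and reaps processes (--init)
        "sharedMemorySize": 256,      # /dev/shm size in MiB
        "tmpfs": [
            {
                "containerPath": "/tmp/scratch",
                "size": 128,                        # MiB
                "mountOptions": ["rw", "noexec"],
            }
        ],
        "maxSwap": 512,               # MiB; must be set for swappiness to apply
        "swappiness": 10,             # 0-100; default is 60
    }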
secrets (list) --
The secrets to pass to the container. For more information, see Specifying Sensitive Data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
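A short sketch of the secrets list described above; the ARNs and names are placeholders for your own Secrets Manager secret and SSM parameter.

    # Each secret is exposed to the container as an environment variable.
    secrets = [
        {
            "name": "DB_PASSWORD",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-abc123",
        },
        {
            "name": "API_KEY",
            "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/api-key",
        },
    ]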
dependsOn (list) --
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition. When a dependency is defined for container startup, it is reversed for container shutdown. A short example follows the list of dependency conditions below.
For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
(dict) --
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, it is reversed for container shutdown.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For more information about how to create a container dependency, see Container dependency in the Amazon Elastic Container Service Developer Guide.
containerName (string) --
The name of a container.
condition (string) --
The dependency condition of the container. The following are the available conditions and their behavior:
START - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
COMPLETE - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container.
SUCCESS - This condition is the same as COMPLETE, but it also requires that the container exits with a zero status. This condition can't be set on an essential container.
HEALTHY - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup.
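A minimal sketch of a dependsOn entry using the HEALTHY condition above; the container names are placeholders and assume the "db" container defines a health check.

    # The application container waits for the "db" sidecar to pass its health
    # check before it is started.
    depends_on = [
        {
            "containerName": "db",
            "condition": "HEALTHY",
        }
    ]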
startTimeout (integer) --
Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a COMPLETE, SUCCESS, or HEALTHY status. If a startTimeout value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a STOPPED state.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks using the EC2 launch type, your container instances require at least version 1.26.0 of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
stopTimeout (integer) --
Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks that use the Fargate launch type, the max stop timeout value is 120 seconds and if the parameter is not specified, the default value of 30 seconds is used.
For tasks that use the EC2 launch type, if the stopTimeout parameter isn't specified, the value set for the Amazon ECS container agent configuration variable ECS_CONTAINER_STOP_TIMEOUT is used. If neither the stopTimeout parameter nor the ECS_CONTAINER_STOP_TIMEOUT agent configuration variable is set, then the default value of 30 seconds is used for both Linux and Windows containers. Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
versionConsistency (string) --
Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, the value is enabled. If you set the value for a container as disabled, Amazon ECS will not resolve the provided container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see Container image resolution in the Amazon ECS Developer Guide.
hostname (string) --
The hostname to use for your container. This parameter maps to Hostname in the docker container create command and the --hostname option to docker run.
user (string) --
The user to use inside the container. This parameter maps to User in the docker container create command and the --user option to docker run.
You can specify the user using the following formats. If specifying a UID or GID, you must specify it as a positive integer.
user
user:group
uid
uid:gid
user:gid
uid:group
workingDirectory (string) --
The working directory to run commands inside the container in. This parameter maps to WorkingDir in the docker container create command and the --workdir option to docker run.
disableNetworking (boolean) --
When this parameter is true, networking is off within the container. This parameter maps to NetworkDisabled in the docker container create command.
privileged (boolean) --
When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root user). This parameter maps to Privileged in the docker container create command and the --privileged option to docker run.
readonlyRootFilesystem (boolean) --
When this parameter is true, the container is given read-only access to its root file system. This parameter maps to ReadonlyRootfs in the docker container create command and the --read-only option to docker run.
dnsServers (list) --
A list of DNS servers that are presented to the container. This parameter maps to Dns in the docker container create command and the --dns option to docker run.
(string) --
dnsSearchDomains (list) --
A list of DNS search domains that are presented to the container. This parameter maps to DnsSearch in the docker container create command and the --dns-search option to docker run.
(string) --
extraHosts (list) --
A list of hostnames and IP address mappings to append to the /etc/hosts file on the container. This parameter maps to ExtraHosts in the docker container create command and the --add-host option to docker run.
(dict) --
Hostnames and IP address entries that are added to the /etc/hosts file of a container via the extraHosts parameter of its ContainerDefinition.
hostname (string) --
The hostname to use in the /etc/hosts entry.
ipAddress (string) --
The IP address to use in the /etc/hosts entry.
dockerSecurityOptions (list) --
A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks using the Fargate launch type.
For Linux tasks on EC2, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems.
For any tasks on EC2, this parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers in the Amazon Elastic Container Service Developer Guide.
This parameter maps to SecurityOpt in the docker container create command and the --security-opt option to docker run.
Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | "credentialspec:CredentialSpecFilePath"
(string) --
interactive (boolean) --
When this parameter is true, you can deploy containerized applications that require stdin or a tty to be allocated. This parameter maps to OpenStdin in the docker container create command and the --interactive option to docker run.
pseudoTerminal (boolean) --
When this parameter is true, a TTY is allocated. This parameter maps to Tty in the docker container create command and the --tty option to docker run.
dockerLabels (dict) --
A key/value map of labels to add to the container. This parameter maps to Labels in the docker container create command and the --label option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
ulimits (list) --
A list of ulimits to set in the container. If a ulimit value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to Ulimits in the docker container create command and the --ulimit option to docker run. Valid naming values are displayed in the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile resource limit parameter which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(dict) --
The ulimit settings to pass to the container.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile resource limit parameter which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
You can specify the ulimit settings for a container in a task definition.
name (string) --
The type of the ulimit.
softLimit (integer) --
The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
hardLimit (integer) --
The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
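A brief sketch raising the nofile limit discussed above; the limit values shown are illustrative.

    # Illustrative ulimits list for a container definition.
    ulimits = [
        {
            "name": "nofile",       # open-file limit
            "softLimit": 65535,
            "hardLimit": 65535,
        }
    ]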
logConfiguration (dict) --
The log configuration specification for the container.
This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options).
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
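Pulling the awslogs options above together, here is a hedged sketch of a logConfiguration block; the log group, Region, prefix, and buffer size are placeholder values.

    # Illustrative logConfiguration using the awslogs driver in non-blocking mode.
    log_configuration = {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/my-service",    # placeholder log group
            "awslogs-create-group": "true",        # create the group if it doesn't exist
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "web",        # required when using Fargate
            "mode": "non-blocking",
            "max-buffer-size": "25m",              # in-memory buffer for non-blocking mode
        },
    }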
healthCheck (dict) --
The container health check command and associated configuration parameters for the container. This parameter maps to HealthCheck in the docker container create command and the HEALTHCHECK parameter of docker run.
command (list) --
A string array representing the command that the container runs to determine if it is healthy. The string array must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container's default shell.
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets.
[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]
You don't include the double quotes and brackets when you use the Amazon Web Services Management Console.
CMD-SHELL, curl -f http://localhost/ || exit 1
An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the docker container create command.
(string) --
interval (integer) --
The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a command.
timeout (integer) --
The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a command.
retries (integer) --
The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a command.
startPeriod (integer) --
The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the startPeriod is off. This value applies only when you specify a command.
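A minimal sketch of the healthCheck parameters above, reusing the CMD-SHELL example; the timing values shown are the documented defaults plus an illustrative start period.

    health_check = {
        "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
        "interval": 30,       # seconds between checks (5-300)
        "timeout": 5,         # seconds to wait for a check (2-60)
        "retries": 3,         # failures before the container is unhealthy (1-10)
        "startPeriod": 60,    # optional grace period in seconds (0-300)
    }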
systemControls (list) --
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure net.ipv4.tcp_keepalive_time setting to maintain longer lived connections.
(dict) --
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure net.ipv4.tcp_keepalive_time setting to maintain longer lived connections.
We don't recommend that you specify network-related systemControls parameters for multiple containers in a single task that also uses either the awsvpc or host network mode. Doing this has the following disadvantages:
For tasks that use the awsvpc network mode including Fargate, if you set systemControls for any container, it applies to all containers in the task. If you set different systemControls for multiple containers in a single task, the container that's started last determines which systemControls take effect.
For tasks that use the host network mode, the network namespace systemControls aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode.
For tasks that use the host IPC mode, IPC namespace systemControls aren't supported.
For tasks that use the task IPC mode, IPC namespace systemControls values apply to all containers within a task.
namespace (string) --
The namespaced kernel parameter to set a value for.
value (string) --
The namespaced kernel parameter to set a value for.
Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*"
Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted.
All of these values are supported by Fargate.
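As a short example of the systemControls parameter, the following sets the TCP keepalive value mentioned above; the specific value is illustrative.

    # Namespaced kernel parameter applied to the container (values are strings).
    system_controls = [
        {
            "namespace": "net.ipv4.tcp_keepalive_time",
            "value": "300",
        }
    ]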
resourceRequirements (list) --
The type and amount of a resource to assign to a container. The only supported resource is a GPU.
(dict) --
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
value (string) --
The value for the specified resource type.
When the type is GPU, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the deviceName for an InferenceAccelerator specified in a task definition.
type (string) --
The type of resource to assign to a container.
firelensConfiguration (dict) --
The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the Amazon Elastic Container Service Developer Guide.
type (string) --
The log router to use. The valid values are fluentd or fluentbit.
options (dict) --
The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is "options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide.
(string) --
(string) --
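A hedged sketch of a firelensConfiguration block using Fluent Bit with a custom configuration file; the S3 ARN is a placeholder.

    firelens_configuration = {
        "type": "fluentbit",
        "options": {
            "enable-ecs-log-metadata": "true",
            "config-file-type": "s3",
            "config-file-value": "arn:aws:s3:::my-config-bucket/fluent-bit.conf",  # placeholder
        },
    }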
credentialSpecs (list) --
A list of ARNs in SSM or Amazon S3 to a credential spec ( CredSpec) file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the dockerSecurityOptions. The maximum number of ARNs is 1.
There are two formats for each ARN.
credentialspecdomainless:MyARN
You use credentialspecdomainless:MyARN to provide a CredSpec with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret.
Each task that runs on any container instance can join different domains.
You can use this format without joining the container instance to a domain.
credentialspec:MyARN
You use credentialspec:MyARN to provide a CredSpec for a single domain.
You must join the container instance to the domain before you start any tasks that use this task definition.
In both formats, replace MyARN with the ARN in SSM or Amazon S3.
If you provide a credentialspecdomainless:MyARN, the credspec must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers.
(string) --
family (string) --
The name of a family that this task definition is registered to. Up to 255 characters are allowed. Letters (uppercase and lowercase), numbers, hyphens (-), and underscores (_) are allowed.
A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add.
taskRoleArn (string) --
The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the task permission to call Amazon Web Services APIs on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
networkMode (string) --
The Docker networking mode to use for the containers in the task. The valid values are none, bridge, awsvpc, and host. If no network mode is specified, the default is bridge.
For Amazon ECS tasks on Fargate, the awsvpc network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, <default> or awsvpc can be used. If the network mode is set to none, you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The host and awsvpc network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the bridge mode.
With the host and awsvpc network modes, exposed container ports are mapped directly to the corresponding host port (for the host network mode) or the attached elastic network interface port (for the awsvpc network mode), so you cannot take advantage of dynamic host port mappings.
If the network mode is awsvpc, the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
If the network mode is host, you cannot run multiple instantiations of the same task on a single container instance when port mappings are used.
revision (integer) --
The revision of the task in a particular family. The revision is a version number of a task definition in a family. When you register a task definition for the first time, the revision is 1. Each time that you register a new revision of a task definition in the same family, the revision value always increases by one. This is true even if you deregistered previous revisions in this family.
volumes (list) --
The list of data volume definitions for the task. For more information, see Using data volumes in tasks in the Amazon Elastic Container Service Developer Guide.
(dict) --
The data volume configuration for tasks launched using this task definition. Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes but only one volume configured at launch is supported. Each volume defined in the volume configuration may only specify a name and one of either configuredAtLaunch, dockerVolumeConfiguration, efsVolumeConfiguration, fsxWindowsFileServerVolumeConfiguration, or host. If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks.
name (string) --
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task.
For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition.
When a volume is using the efsVolumeConfiguration, the name is required.
host (dict) --
This parameter is specified when you use bind mount host volumes. The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running.
Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't span drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my\path:C:\my\path or D:\:C:\my\path.
sourcePath (string) --
When the host parameter is used, specify a sourcePath to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
If you're using the Fargate launch type, the sourcePath parameter is not supported.
dockerVolumeConfiguration (dict) --
This parameter is specified when you use Docker volumes.
Windows containers only support the use of the local driver. To use bind mounts, specify the host parameter instead.
scope (string) --
The scope for the Docker volume that determines its lifecycle. Docker volumes that are scoped to a task are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as shared persist after the task stops.
autoprovision (boolean) --
If this value is true, the Docker volume is created if it doesn't already exist.
driver (string) --
The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use docker plugin ls to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. This parameter maps to Driver in the docker container create command and the --driver option to docker volume create.
driverOpts (dict) --
A map of Docker driver-specific options passed through. This parameter maps to DriverOpts in the docker volume create command and the --opt option to docker volume create.
(string) --
(string) --
labels (dict) --
Custom metadata to add to your Docker volume. This parameter maps to Labels in the docker container create command and the --label option to docker volume create.
(string) --
(string) --
efsVolumeConfiguration (dict) --
This parameter is specified when you use an Amazon Elastic File System file system for task storage.
fileSystemId (string) --
The Amazon EFS file system ID to use.
rootDirectory (string) --
The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying / will have the same effect as omitting this parameter.
transitEncryption (string) --
Determines whether to use encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be turned on if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of DISABLED is used. For more information, see Encrypting data in transit in the Amazon Elastic File System User Guide.
transitEncryptionPort (integer) --
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS mount helper in the Amazon Elastic File System User Guide.
authorizationConfig (dict) --
The authorization configuration details for the Amazon EFS file system.
accessPointId (string) --
The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the EFSVolumeConfiguration must either be omitted or set to / which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be on in the EFSVolumeConfiguration. For more information, see Working with Amazon EFS access points in the Amazon Elastic File System User Guide.
iam (string) --
Determines whether to use the Amazon ECS task role defined in a task definition when mounting the Amazon EFS file system. If it is turned on, transit encryption must be turned on in the EFSVolumeConfiguration. If this parameter is omitted, the default value of DISABLED is used. For more information, see Using Amazon EFS access points in the Amazon Elastic Container Service Developer Guide.
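A minimal sketch of a task-level volume backed by Amazon EFS with transit encryption and an access point, following the constraints above; the file system and access point IDs are placeholders.

    # Referenced from a container's mountPoints via sourceVolume="shared-data".
    volumes = [
        {
            "name": "shared-data",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",
                "rootDirectory": "/",               # must be "/" or omitted when an access point is used
                "transitEncryption": "ENABLED",     # required when IAM auth or an access point is used
                "authorizationConfig": {
                    "accessPointId": "fsap-0123456789abcdef0",
                    "iam": "ENABLED",
                },
            },
        }
    ]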
fsxWindowsFileServerVolumeConfiguration (dict) --
This parameter is specified when you use Amazon FSx for Windows File Server file system for task storage.
fileSystemId (string) --
The Amazon FSx for Windows File Server file system ID to use.
rootDirectory (string) --
The directory within the Amazon FSx for Windows File Server file system to mount as the root directory inside the host.
authorizationConfig (dict) --
The authorization configuration details for the Amazon FSx for Windows File Server file system.
credentialsParameter (string) --
The authorization credential option to use. The authorization credential options can be provided using either the Amazon Resource Name (ARN) of a Secrets Manager secret or an SSM Parameter Store parameter. The ARN refers to the stored credentials.
domain (string) --
A fully qualified domain name hosted by a Directory Service Managed Microsoft AD (Active Directory) or self-hosted AD on Amazon EC2.
configuredAtLaunch (boolean) --
Indicates whether the volume should be configured at launch time. This is used to create Amazon EBS volumes for standalone tasks or tasks created as part of a service. Each task definition revision may only have one volume configured at launch in the volume configuration.
To configure a volume at launch time, use this task definition revision and specify a volumeConfigurations object when calling the CreateService, UpdateService, RunTask or StartTask APIs.
status (string) --
The status of the task definition.
requiresAttributes (list) --
The container instance attributes required by your task. When an Amazon EC2 instance is registered to your cluster, the Amazon ECS container agent assigns some standard attributes to the instance. You can apply custom attributes. These are specified as key-value pairs using the Amazon ECS console or the PutAttributes API. These attributes are used when determining task placement for tasks hosted on Amazon EC2 instances. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
(dict) --
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the attribute. The name must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
value (string) --
The value of the attribute. The value must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
targetType (string) --
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
targetId (string) --
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
placementConstraints (list) --
An array of placement constraint objects to use for tasks.
(dict) --
The constraint on task placement in the task definition. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. The MemberOf constraint restricts selection to be from a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
compatibilities (list) --
Amazon ECS validates the task definition parameters with those supported by the launch type. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
(string) --
runtimePlatform (dict) --
The operating system that your task definitions are running on. A platform family is specified only for tasks using the Fargate launch type.
When you specify a task in a service, this value must match the runtimePlatform value of the service.
cpuArchitecture (string) --
The CPU architecture.
You can run your Linux tasks on an ARM-based platform by setting the value to ARM64. This option is available for tasks that run on Linux Amazon EC2 instance or Linux containers on Fargate.
operatingSystemFamily (string) --
The operating system.
requiresCompatibilities (list) --
The task launch types the task definition was validated against. The valid values are EC2, FARGATE, and EXTERNAL. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
(string) --
cpu (string) --
The number of cpu units used by the task. If you use the EC2 launch type, this field is optional. Any value can be used. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the memory parameter.
If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between 128 CPU units (0.125 vCPUs) and 196608 CPU units (192 vCPUs).
This field is required for Fargate. For information about the valid values, see Task size in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The amount (in MiB) of memory used by the task.
If your task runs on Amazon EC2 instances, you must specify either a task-level memory value or a container-level memory value. This field is optional and any value can be used. If a task-level memory value is specified, the container-level memory value is optional. For more information regarding container-level memory and memory reservation, see ContainerDefinition.
If your task runs on Fargate, this field is required. You must use one of the following values. The value you choose determines your range of valid values for the cpu parameter. A short example follows the list below.
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu values: 256 (.25 vCPU)
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available cpu values: 512 (.5 vCPU)
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available cpu values: 1024 (1 vCPU)
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available cpu values: 2048 (2 vCPU)
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available cpu values: 4096 (4 vCPU)
Between 16 GB and 60 GB in 4 GB increments - Available cpu values: 8192 (8 vCPU) This option requires Linux platform 1.4.0 or later.
Between 32 GB and 120 GB in 8 GB increments - Available cpu values: 16384 (16 vCPU) This option requires Linux platform 1.4.0 or later.
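As a minimal sketch of one valid Fargate pairing from the list above (the family name and container image below are placeholders, not values from this document), a task definition might register 1024 CPU units with 2048 MiB of memory:
import boto3

ecs = boto3.client('ecs')

# Hypothetical example: 1024 CPU units (1 vCPU) pairs with 2048 MiB (2 GB) on Fargate.
ecs.register_task_definition(
    family='example-fargate-task',        # placeholder family name
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='1024',
    memory='2048',
    containerDefinitions=[
        {
            'name': 'app',
            'image': 'public.ecr.aws/docker/library/nginx:latest',  # placeholder image
            'essential': True,
        },
    ],
)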
inferenceAccelerators (list) --
The Elastic Inference accelerator that's associated with the task.
(dict) --
Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name. The deviceName must also be referenced in a container definition as a ResourceRequirement.
deviceType (string) --
The Elastic Inference accelerator type to use.
pidMode (string) --
The process namespace to use for the containers in the task. The valid values are host or task. On Fargate for Linux containers, the only valid value is task. For example, monitoring sidecars might need pidMode to access information about other containers running in the same task.
If host is specified, all containers within the tasks that specified the host PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.
If task is specified, all containers within the specified task share the same process namespace.
If no value is specified, the default is a private namespace for each container.
If the host PID mode is used, there's a heightened risk of undesired process namespace exposure.
ipcMode (string) --
The IPC resource namespace to use for the containers in the task. The valid values are host, task, or none. If host is specified, then all containers within the tasks that specified the host IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If task is specified, all containers within the specified task share the same IPC resources. If none is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance.
If the host IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure.
If you are setting namespaced kernel parameters using systemControls for the containers in the task, the following will apply to your IPC resource namespace. For more information, see System Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host IPC mode, IPC namespace related systemControls are not supported.
For tasks that use the task IPC mode, IPC namespace related systemControls will apply to all containers within a task.
proxyConfiguration (dict) --
The configuration details for the App Mesh proxy.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the ecs-init package to use a proxy configuration. If your container instances are launched from the Amazon ECS optimized AMI version 20190301 or later, they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
type (string) --
The proxy type. The only supported value is APPMESH.
containerName (string) --
The name of the container that will serve as the App Mesh proxy.
properties (list) --
The set of network configuration parameters to provide to the Container Network Interface (CNI) plugin, specified as key-value pairs. An illustrative example follows the property descriptions below.
IgnoredUID - (Required) The user ID (UID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredGID is specified, this field can be empty.
IgnoredGID - (Required) The group ID (GID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredUID is specified, this field can be empty.
AppPorts - (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the ProxyIngressPort and ProxyEgressPort.
ProxyIngressPort - (Required) Specifies the port that incoming traffic to the AppPorts is directed to.
ProxyEgressPort - (Required) Specifies the port that outgoing traffic from the AppPorts is directed to.
EgressIgnoredPorts - (Required) The egress traffic going to the specified ports is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
EgressIgnoredIPs - (Required) The egress traffic going to the specified IP addresses is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
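For illustration only, the properties described above might be supplied as key-value pairs like the following when registering a task definition; the container name, UID, ports, and IP addresses are assumptions rather than values from this document:
# Hypothetical proxyConfiguration for an App Mesh proxy sidecar.
proxy_configuration = {
    'type': 'APPMESH',
    'containerName': 'envoy',                            # placeholder proxy container name
    'properties': [
        {'name': 'IgnoredUID', 'value': '1337'},         # matches the proxy container's user parameter
        {'name': 'AppPorts', 'value': '8080'},           # application port(s)
        {'name': 'ProxyIngressPort', 'value': '15000'},
        {'name': 'ProxyEgressPort', 'value': '15001'},
        {'name': 'EgressIgnoredPorts', 'value': ''},     # may be empty
        {'name': 'EgressIgnoredIPs', 'value': '169.254.170.2,169.254.169.254'},
    ],
}
# Passed as the proxyConfiguration argument to register_task_definition.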
registeredAt (datetime) --
The Unix timestamp for the time when the task definition was registered.
deregisteredAt (datetime) --
The Unix timestamp for the time when the task definition was deregistered.
registeredBy (string) --
The principal that registered the task definition.
ephemeralStorage (dict) --
The ephemeral storage settings to use for tasks run with the task definition.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
enableFaultInjection (boolean) --
Enables fault injection and allows for fault injection requests to be accepted from the task's containers. The default value is false.
tags (list) --
The metadata that's applied to the task definition to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
{'taskSets': {'launchType': {'MANAGED_INSTANCES'}}}
Describes the task sets in the specified cluster and service. This is used when a service uses the EXTERNAL deployment controller type. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
See also: AWS API Documentation
Request Syntax
client.describe_task_sets( cluster='string', service='string', taskSets=[ 'string', ], include=[ 'TAGS', ] )
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that the task sets exist in.
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the service that the task sets exist in.
list
The ID or full Amazon Resource Name (ARN) of task sets to describe.
(string) --
list
Specifies whether to see the resource tags for the task set. If TAGS is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
(string) --
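A minimal usage sketch of this request follows; the cluster, service, and task set ID are placeholders rather than values from this document:
import boto3

ecs = boto3.client('ecs')

response = ecs.describe_task_sets(
    cluster='my-cluster',                          # placeholder cluster name
    service='my-service',                          # placeholder service name
    taskSets=['ecs-svc/1234567890123456789'],      # placeholder task set ID
    include=['TAGS'],
)
for task_set in response['taskSets']:
    print(task_set['id'], task_set['status'], task_set['launchType'])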
dict
Response Syntax
{ 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] }
Response Structure
(dict) --
taskSets (list) --
The list of task sets described.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that's associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
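Sketching the weight and base behavior described above, a strategy that places a base of two tasks on one provider and then splits additional tasks 1:4 might look like the following; capacityProviderA and capacityProviderB are hypothetical names, not real resources:
# Hypothetical strategy: the first 2 tasks land on capacityProviderA (base),
# then additional tasks are distributed 1:4 between A and B by weight.
capacity_provider_strategy = [
    {'capacityProvider': 'capacityProviderA', 'base': 2, 'weight': 1},
    {'capacityProvider': 'capacityProviderB', 'weight': 4},
]
# For example, passed as capacityProviderStrategy to create_service or run_task.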
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
loadBalancers (list) --
Details on a load balancer that's used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, omit the load balancer name parameter.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment.
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide.
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
{'tasks': {'launchType': {'MANAGED_INSTANCES'}}}
Describes a specified task or tasks.
Currently, stopped tasks appear in the returned results for at least one hour.
If you have tasks with tags, and then delete the cluster, the tagged tasks are returned in the response. If you create a new cluster with the same name as the deleted cluster, the tagged tasks are not included in the response.
See also: AWS API Documentation
Request Syntax
client.describe_tasks( cluster='string', tasks=[ 'string', ], include=[ 'TAGS', ] )
string
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the task or tasks to describe. If you do not specify a cluster, the default cluster is assumed.
list
[REQUIRED]
A list of up to 100 task IDs or full ARN entries.
(string) --
list
Specifies whether you want to see the resource tags for the task. If TAGS is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
(string) --
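As a minimal sketch of the request above; the cluster name and task ID are placeholders:
import boto3

ecs = boto3.client('ecs')

response = ecs.describe_tasks(
    cluster='my-cluster',                                  # placeholder cluster name
    tasks=['0123456789abcdef0123456789abcdef'],            # placeholder task ID
    include=['TAGS'],
)
for task in response['tasks']:
    print(task['taskArn'], task['lastStatus'], task['launchType'])
for failure in response['failures']:
    print(failure['arn'], failure['reason'])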
dict
Response Syntax
{ 'tasks': [ { 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'availabilityZone': 'string', 'capacityProviderName': 'string', 'clusterArn': 'string', 'connectivity': 'CONNECTED'|'DISCONNECTED', 'connectivityAt': datetime(2015, 1, 1), 'containerInstanceArn': 'string', 'containers': [ { 'containerArn': 'string', 'taskArn': 'string', 'name': 'string', 'image': 'string', 'imageDigest': 'string', 'runtimeId': 'string', 'lastStatus': 'string', 'exitCode': 123, 'reason': 'string', 'networkBindings': [ { 'bindIP': 'string', 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'containerPortRange': 'string', 'hostPortRange': 'string' }, ], 'networkInterfaces': [ { 'attachmentId': 'string', 'privateIpv4Address': 'string', 'ipv6Address': 'string' }, ], 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'managedAgents': [ { 'lastStartedAt': datetime(2015, 1, 1), 'name': 'ExecuteCommandAgent', 'reason': 'string', 'lastStatus': 'string' }, ], 'cpu': 'string', 'memory': 'string', 'memoryReservation': 'string', 'gpuIds': [ 'string', ] }, ], 'cpu': 'string', 'createdAt': datetime(2015, 1, 1), 'desiredStatus': 'string', 'enableExecuteCommand': True|False, 'executionStoppedAt': datetime(2015, 1, 1), 'group': 'string', 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'lastStatus': 'string', 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'memory': 'string', 'overrides': { 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, 'platformVersion': 'string', 'platformFamily': 'string', 'pullStartedAt': datetime(2015, 1, 1), 'pullStoppedAt': datetime(2015, 1, 1), 'startedAt': datetime(2015, 1, 1), 'startedBy': 'string', 'stopCode': 'TaskFailedToStart'|'EssentialContainerExited'|'UserInitiated'|'ServiceSchedulerInitiated'|'SpotInterruption'|'TerminationNotice', 'stoppedAt': datetime(2015, 1, 1), 'stoppedReason': 'string', 'stoppingAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'taskArn': 'string', 'taskDefinitionArn': 'string', 'version': 123, 'ephemeralStorage': { 'sizeInGiB': 123 }, 'fargateEphemeralStorage': { 'sizeInGiB': 123, 'kmsKeyId': 'string' } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] }Response Structure
(dict) --
tasks (list) --
The list of tasks.
(dict) --
Details on a task in a cluster.
attachments (list) --
The Elastic Network Adapter that's associated with the task if the task uses the awsvpc network mode.
(dict) --
An object representing a container instance or task attachment.
id (string) --
The unique identifier for the attachment.
type (string) --
The type of the attachment, such as ElasticNetworkInterface, Service Connect, and AmazonElasticBlockStorage.
status (string) --
The status of the attachment. Valid values are PRECREATED, CREATED, ATTACHING, ATTACHED, DETACHING, DETACHED, DELETED, and FAILED.
details (list) --
Details of the attachment.
For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address.
For Service Connect services, this includes portName, clientAliases, discoveryName, and ingressPortOverride.
For Elastic Block Storage, this includes roleArn, deleteOnTermination, volumeName, volumeId, and statusReason (only when the attachment fails to create or attach).
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
attributes (list) --
The attributes of the task.
(dict) --
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the attribute. The name must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
value (string) --
The value of the attribute. The value must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
targetType (string) --
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
targetId (string) --
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
availabilityZone (string) --
The Availability Zone for the task.
capacityProviderName (string) --
The capacity provider that's associated with the task.
clusterArn (string) --
The ARN of the cluster that hosts the task.
connectivity (string) --
The connectivity status of a task.
connectivityAt (datetime) --
The Unix timestamp for the time when the task last went into CONNECTED status.
containerInstanceArn (string) --
The ARN of the container instance that hosts the task.
containers (list) --
The containers that are associated with the task.
(dict) --
A Docker container that's part of a task.
containerArn (string) --
The Amazon Resource Name (ARN) of the container.
taskArn (string) --
The ARN of the task.
name (string) --
The name of the container.
image (string) --
The image used for the container.
imageDigest (string) --
The container image manifest digest.
runtimeId (string) --
The ID of the Docker container.
lastStatus (string) --
The last known status of the container.
exitCode (integer) --
The exit code returned from the container.
reason (string) --
A short (1024 max characters) human-readable string to provide additional details about a running or stopped container.
networkBindings (list) --
The network bindings associated with the container.
(dict) --
Details on the network bindings between a container and its host container instance. After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the networkBindings section of DescribeTasks API responses.
bindIP (string) --
The IP address that the container is bound to on the container instance.
containerPort (integer) --
The port number on the container that's used with the network binding.
hostPort (integer) --
The port number on the host that's used with the network binding.
protocol (string) --
The protocol used for the network binding.
containerPortRange (string) --
The port number range on the container that's bound to the dynamically mapped host port range.
The following rules apply when you specify a containerPortRange:
You must use either the bridge network mode or the awsvpc network mode.
This parameter is available for both the EC2 and Fargate launch types.
This parameter is available for both the Linux and Windows operating systems.
The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the ecs-init package.
You can specify a maximum of 100 port ranges per container.
You do not specify a hostPortRange. The value of the hostPortRange is set as follows:
For containers in a task with the awsvpc network mode, the hostPortRange is set to the same value as the containerPortRange. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes it to docker to bind them to the container ports.
The containerPortRange valid values are between 1 and 65535.
A port can only be included in one port mapping per container.
You cannot specify overlapping port ranges.
The first port in the range must be less than last port in the range.
Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the Github website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the Amazon ECS Developer Guide.
You can call DescribeTasks to view the hostPortRange, which lists the host ports that are bound to the container ports. An illustrative port mapping follows.
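A port mapping that follows the rules above, using containerPortRange and deliberately omitting hostPortRange, might look like the following in a container definition; the range shown is an assumption for illustration:
# Hypothetical port mapping inside a container definition.
port_mappings = [
    {
        'containerPortRange': '8000-8010',   # contiguous container ports; hostPortRange is not set
        'protocol': 'tcp',
    },
]
# With the awsvpc network mode the host port range mirrors the container port range;
# with the bridge network mode the agent picks open host ports from the ephemeral range.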
hostPortRange (string) --
The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent.
networkInterfaces (list) --
The network interfaces associated with the container.
(dict) --
An object representing the elastic network interface for tasks that use the awsvpc network mode.
attachmentId (string) --
The attachment ID for the network interface.
privateIpv4Address (string) --
The private IPv4 address for the network interface.
ipv6Address (string) --
The private IPv6 address for the network interface.
healthStatus (string) --
The health status of the container. If health checks aren't configured for this container in its task definition, then it reports the health status as UNKNOWN.
managedAgents (list) --
The details of any Amazon ECS managed agents associated with the container.
(dict) --
Details about the managed agent status for the container.
lastStartedAt (datetime) --
The Unix timestamp for the time when the managed agent was last started.
name (string) --
The name of the managed agent. When the execute command feature is turned on, the managed agent name is ExecuteCommandAgent.
reason (string) --
The reason why the managed agent is in its current state.
lastStatus (string) --
The last known status of the managed agent.
cpu (string) --
The number of CPU units set for the container. The value is 0 if no value was specified in the container definition when the task definition was registered.
memory (string) --
The hard limit (in MiB) of memory set for the container.
memoryReservation (string) --
The soft limit (in MiB) of memory set for the container.
gpuIds (list) --
The IDs of each GPU assigned to the container.
(string) --
cpu (string) --
The number of CPU units used by the task as expressed in a task definition. It can be expressed as an integer using CPU units (for example, 1024). It can also be expressed as a string using vCPUs (for example, 1 vCPU or 1 vcpu). String values are converted to an integer that indicates the CPU units when the task definition is registered.
If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between 128 CPU units (0.125 vCPUs) and 196608 CPU units (192 vCPUs). If you do not specify a value, the parameter is ignored.
This field is required for Fargate. For information about the valid values, see Task size in the Amazon Elastic Container Service Developer Guide.
createdAt (datetime) --
The Unix timestamp for the time when the task was created. More specifically, it's for the time when the task entered the PENDING state.
desiredStatus (string) --
The desired status of the task. For more information, see Task Lifecycle.
enableExecuteCommand (boolean) --
Determines whether execute command functionality is turned on for this task. If true, execute command functionality is turned on for all the containers in the task.
executionStoppedAt (datetime) --
The Unix timestamp for the time when the task execution stopped.
group (string) --
The name of the task group that's associated with the task.
healthStatus (string) --
The health status for the task. It's determined by the health of the essential containers in the task. If all essential containers in the task are reporting as HEALTHY, the task status also reports as HEALTHY. If any essential containers in the task are reporting as UNHEALTHY or UNKNOWN, the task status also reports as UNHEALTHY or UNKNOWN.
inferenceAccelerators (list) --
The Elastic Inference accelerator that's associated with the task.
(dict) --
Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name. The deviceName must also be referenced in a container definition as a ResourceRequirement.
deviceType (string) --
The Elastic Inference accelerator type to use.
lastStatus (string) --
The last known status for the task. For more information, see Task Lifecycle.
launchType (string) --
The infrastructure that your task runs on. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The amount of memory (in MiB) that the task uses as expressed in a task definition. It can be expressed as an integer using MiB (for example, 1024). If it's expressed as a string using GB (for example, 1GB or 1 GB), it's converted to an integer indicating the MiB when the task definition is registered.
If you use the EC2 launch type, this field is optional.
If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines the range of supported values for the cpu parameter.
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu values: 256 (.25 vCPU)
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available cpu values: 512 (.5 vCPU)
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available cpu values: 1024 (1 vCPU)
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available cpu values: 2048 (2 vCPU)
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available cpu values: 4096 (4 vCPU)
Between 16 GB and 60 GB in 4 GB increments - Available cpu values: 8192 (8 vCPU) This option requires Linux platform 1.4.0 or later.
Between 32 GB and 120 GB in 8 GB increments - Available cpu values: 16384 (16 vCPU) This option requires Linux platform 1.4.0 or later.
overrides (dict) --
One or more container overrides.
containerOverrides (list) --
One or more container overrides that are sent to a task.
(dict) --
The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is {"containerOverrides": [ ] }. If a non-empty container override is specified, the name parameter must be included.
You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide.
name (string) --
The name of the container that receives the override. This parameter is required if any override is specified.
command (list) --
The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name.
(string) --
environment (list) --
The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
environmentFiles (list) --
A list of files containing the environment variables to pass to a container, instead of the value from the container definition.
(dict) --
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the VARIABLE values.
value (string) --
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type (string) --
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
cpu (integer) --
The number of cpu units reserved for the container, instead of the default value from the task definition. You must also specify a container name.
memory (integer) --
The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name.
memoryReservation (integer) --
The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name.
resourceRequirements (list) --
The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU.
(dict) --
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
value (string) --
The value for the specified resource type.
When the type is GPU, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the deviceName for an InferenceAccelerator specified in a task definition.
type (string) --
The type of resource to assign to a container.
cpu (string) --
The CPU override for the task.
inferenceAcceleratorOverrides (list) --
The Elastic Inference accelerator override for the task.
(dict) --
Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name to override for the task. This parameter must match a deviceName specified in the task definition.
deviceType (string) --
The Elastic Inference accelerator type to use.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The memory override for the task.
taskRoleArn (string) --
The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the Amazon Elastic Container Service Developer Guide.
ephemeralStorage (dict) --
The ephemeral storage setting override for the task.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
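As a sketch of how an overrides structure like the one described above might be supplied at run time; the cluster, task definition, container name, command, and subnet ID are placeholders, not values from this document:
import boto3

ecs = boto3.client('ecs')

ecs.run_task(
    cluster='my-cluster',                       # placeholder cluster name
    taskDefinition='example-fargate-task:1',    # placeholder task definition and revision
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],   # placeholder subnet ID
            'assignPublicIp': 'DISABLED',
        },
    },
    overrides={
        'containerOverrides': [
            {
                'name': 'app',                           # must match the container definition name
                'command': ['python', 'worker.py'],      # placeholder command
                'environment': [{'name': 'MODE', 'value': 'batch'}],
                'cpu': 512,
                'memory': 1024,
            },
        ],
    },
)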
platformVersion (string) --
The platform version that your task runs on. A platform version is only specified for tasks that use the Fargate launch type. If you didn't specify one, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
pullStartedAt (datetime) --
The Unix timestamp for the time when the container image pull began.
pullStoppedAt (datetime) --
The Unix timestamp for the time when the container image pull completed.
startedAt (datetime) --
The Unix timestamp for the time when the task started. More specifically, it's for the time when the task transitioned from the PENDING state to the RUNNING state.
startedBy (string) --
The tag specified when a task is started. If an Amazon ECS service started the task, the startedBy parameter contains the deployment ID of that service.
stopCode (string) --
The stop code indicating why a task was stopped. The stoppedReason might contain additional details.
For more information about stop code, see Stopped tasks error codes in the Amazon ECS Developer Guide.
stoppedAt (datetime) --
The Unix timestamp for the time when the task was stopped. More specifically, it's for the time when the task transitioned from the RUNNING state to the STOPPED state.
stoppedReason (string) --
The reason that the task was stopped.
stoppingAt (datetime) --
The Unix timestamp for the time when the task stops. More specifically, it's for the time when the task transitions from the RUNNING state to STOPPING.
tags (list) --
The metadata that you apply to the task to help you categorize and organize the task. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
taskArn (string) --
The Amazon Resource Name (ARN) of the task.
taskDefinitionArn (string) --
The ARN of the task definition that creates the task.
version (integer) --
The version counter for the task. Every time a task experiences a change that starts a CloudWatch event, the version counter is incremented. If you replicate your Amazon ECS task state with CloudWatch Events, you can compare the version of a task reported by the Amazon ECS API actions with the version reported in CloudWatch Events for the task (inside the detail object) to verify that the version in your event stream is current.
ephemeralStorage (dict) --
The ephemeral storage settings for the task.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task.
sizeInGiB (integer) --
The total amount, in GiB, of the ephemeral storage to set for the task. The minimum supported value is 20 GiB and the maximum supported value is 200 GiB.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the task.
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide.
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
{'launchType': {'MANAGED_INSTANCES'}}
Returns a list of services. You can filter the results by cluster, launch type, and scheduling strategy.
See also: AWS API Documentation
Request Syntax
client.list_services( cluster='string', nextToken='string', maxResults=123, launchType='EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', schedulingStrategy='REPLICA'|'DAEMON' )
string
The short name or full Amazon Resource Name (ARN) of the cluster to use when filtering the ListServices results. If you do not specify a cluster, the default cluster is assumed.
string
The nextToken value returned from a ListServices request indicating that more results are available to fulfill the request and further calls will be needed. If maxResults was provided, it's possible that the number of results is fewer than maxResults.
integer
The maximum number of service results returned by ListServices in paginated output. When this parameter is used, ListServices only returns maxResults results in a single page along with a nextToken response element. The remaining results of the initial request can be seen by sending another ListServices request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn't used, then ListServices returns up to 10 results and a nextToken value if applicable.
string
The launch type to use when filtering the ListServices results.
string
The scheduling strategy to use when filtering the ListServices results.
dict
Response Syntax
{ 'serviceArns': [ 'string', ], 'nextToken': 'string' }
Response Structure
(dict) --
serviceArns (list) --
The list of full ARN entries for each service that's associated with the specified cluster.
(string) --
nextToken (string) --
The nextToken value to include in a future ListServices request. When the results of a ListServices request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.
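As a rough sketch of paginating ListServices with boto3 (the cluster name demo-cluster and the MANAGED_INSTANCES filter are hypothetical placeholders), you might loop on nextToken like this:
import boto3

ecs = boto3.client('ecs')

service_arns = []
next_token = None
while True:
    kwargs = {
        'cluster': 'demo-cluster',          # hypothetical cluster name
        'launchType': 'MANAGED_INSTANCES',  # filter on the new launch type
        'maxResults': 100,
    }
    if next_token:
        kwargs['nextToken'] = next_token
    page = ecs.list_services(**kwargs)
    service_arns.extend(page['serviceArns'])
    next_token = page.get('nextToken')      # absent on the last page
    if not next_token:
        break
print(service_arns)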
{'launchType': {'MANAGED_INSTANCES'}}
Returns a list of tasks. You can filter the results by cluster, task definition family, container instance, launch type, what IAM principal started the task, or by the desired status of the task.
Recently stopped tasks might appear in the returned results.
See also: AWS API Documentation
Request Syntax
client.list_tasks( cluster='string', containerInstance='string', family='string', nextToken='string', maxResults=123, startedBy='string', serviceName='string', desiredStatus='RUNNING'|'PENDING'|'STOPPED', launchType='EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES' )
string
The short name or full Amazon Resource Name (ARN) of the cluster to use when filtering the ListTasks results. If you do not specify a cluster, the default cluster is assumed.
string
The container instance ID or full ARN of the container instance to use when filtering the ListTasks results. Specifying a containerInstance limits the results to tasks that belong to that container instance.
string
The name of the task definition family to use when filtering the ListTasks results. Specifying a family limits the results to tasks that belong to that family.
string
The nextToken value returned from a ListTasks request indicating that more results are available to fulfill the request and further calls will be needed. If maxResults was provided, it's possible that the number of results is fewer than maxResults.
integer
The maximum number of task results returned by ListTasks in paginated output. When this parameter is used, ListTasks only returns maxResults results in a single page along with a nextToken response element. The remaining results of the initial request can be seen by sending another ListTasks request with the returned nextToken value. This value can be between 1 and 100. If this parameter isn't used, then ListTasks returns up to 100 results and a nextToken value if applicable.
string
The startedBy value to filter the task results with. Specifying a startedBy value limits the results to tasks that were started with that value.
When you specify startedBy as the filter, it must be the only filter that you use.
string
The name of the service to use when filtering the ListTasks results. Specifying a serviceName limits the results to tasks that belong to that service.
string
The task desired status to use when filtering the ListTasks results. Specifying a desiredStatus of STOPPED limits the results to tasks for which Amazon ECS has set the desired status to STOPPED. This can be useful for debugging tasks that aren't starting properly or have died or finished. The default status filter is RUNNING, which shows tasks for which Amazon ECS has set the desired status to RUNNING.
string
The launch type to use when filtering the ListTasks results.
dict
Response Syntax
{ 'taskArns': [ 'string', ], 'nextToken': 'string' }
Response Structure
(dict) --
taskArns (list) --
The list of task ARN entries for the ListTasks request.
(string) --
nextToken (string) --
The nextToken value to include in a future ListTasks request. When the results of a ListTasks request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.
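A minimal sketch of filtering ListTasks by the new MANAGED_INSTANCES launch type (the cluster and service names are hypothetical):
import boto3

ecs = boto3.client('ecs')

response = ecs.list_tasks(
    cluster='demo-cluster',          # hypothetical cluster name
    serviceName='web',               # hypothetical service name
    desiredStatus='RUNNING',
    launchType='MANAGED_INSTANCES',
)
for task_arn in response['taskArns']:
    print(task_arn)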
{'requiresCompatibilities': {'MANAGED_INSTANCES'}}Response
{'taskDefinition': {'compatibilities': {'MANAGED_INSTANCES'}, 'requiresCompatibilities': {'MANAGED_INSTANCES'}}}
Registers a new task definition from the supplied family and containerDefinitions. Optionally, you can add data volumes to your containers with the volumes parameter. For more information about task definition parameters and defaults, see Amazon ECS Task Definitions in the Amazon Elastic Container Service Developer Guide.
You can specify a role for your task with the taskRoleArn parameter. When you specify a role for a task, its containers can then use the latest versions of the CLI or SDKs to make API requests to the Amazon Web Services services that are specified in the policy that's associated with the role. For more information, see IAM Roles for Tasks in the Amazon Elastic Container Service Developer Guide.
You can specify a Docker networking mode for the containers in your task definition with the networkMode parameter. If you specify the awsvpc network mode, the task is allocated an elastic network interface, and you must specify a NetworkConfiguration when you create a service or run a task with the task definition. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
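Before the full request syntax below, here is a minimal sketch of registering a task definition that targets MANAGED_INSTANCES; the family name, image, CPU/memory values, and execution role ARN are illustrative assumptions, not prescribed values:
import boto3

ecs = boto3.client('ecs')

response = ecs.register_task_definition(
    family='demo-app',                              # hypothetical family name
    networkMode='awsvpc',
    requiresCompatibilities=['MANAGED_INSTANCES'],
    cpu='256',                                      # illustrative task-level CPU units
    memory='512',                                   # illustrative task-level memory (MiB)
    executionRoleArn='arn:aws:iam::123456789012:role/ecsTaskExecutionRole',  # hypothetical ARN
    containerDefinitions=[
        {
            'name': 'web',
            'image': 'public.ecr.aws/nginx/nginx:latest',
            'essential': True,
            'portMappings': [{'containerPort': 80, 'protocol': 'tcp'}],
        },
    ],
)
print(response['taskDefinition']['taskDefinitionArn'])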
See also: AWS API Documentation
Request Syntax
client.register_task_definition( family='string', taskRoleArn='string', executionRoleArn='string', networkMode='bridge'|'host'|'awsvpc'|'none', containerDefinitions=[ { 'name': 'string', 'image': 'string', 'repositoryCredentials': { 'credentialsParameter': 'string' }, 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'links': [ 'string', ], 'portMappings': [ { 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'name': 'string', 'appProtocol': 'http'|'http2'|'grpc', 'containerPortRange': 'string' }, ], 'essential': True|False, 'restartPolicy': { 'enabled': True|False, 'ignoredExitCodes': [ 123, ], 'restartAttemptPeriod': 123 }, 'entryPoint': [ 'string', ], 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'mountPoints': [ { 'sourceVolume': 'string', 'containerPath': 'string', 'readOnly': True|False }, ], 'volumesFrom': [ { 'sourceContainer': 'string', 'readOnly': True|False }, ], 'linuxParameters': { 'capabilities': { 'add': [ 'string', ], 'drop': [ 'string', ] }, 'devices': [ { 'hostPath': 'string', 'containerPath': 'string', 'permissions': [ 'read'|'write'|'mknod', ] }, ], 'initProcessEnabled': True|False, 'sharedMemorySize': 123, 'tmpfs': [ { 'containerPath': 'string', 'size': 123, 'mountOptions': [ 'string', ] }, ], 'maxSwap': 123, 'swappiness': 123 }, 'secrets': [ { 'name': 'string', 'valueFrom': 'string' }, ], 'dependsOn': [ { 'containerName': 'string', 'condition': 'START'|'COMPLETE'|'SUCCESS'|'HEALTHY' }, ], 'startTimeout': 123, 'stopTimeout': 123, 'versionConsistency': 'enabled'|'disabled', 'hostname': 'string', 'user': 'string', 'workingDirectory': 'string', 'disableNetworking': True|False, 'privileged': True|False, 'readonlyRootFilesystem': True|False, 'dnsServers': [ 'string', ], 'dnsSearchDomains': [ 'string', ], 'extraHosts': [ { 'hostname': 'string', 'ipAddress': 'string' }, ], 'dockerSecurityOptions': [ 'string', ], 'interactive': True|False, 'pseudoTerminal': True|False, 'dockerLabels': { 'string': 'string' }, 'ulimits': [ { 'name': 'core'|'cpu'|'data'|'fsize'|'locks'|'memlock'|'msgqueue'|'nice'|'nofile'|'nproc'|'rss'|'rtprio'|'rttime'|'sigpending'|'stack', 'softLimit': 123, 'hardLimit': 123 }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] }, 'healthCheck': { 'command': [ 'string', ], 'interval': 123, 'timeout': 123, 'retries': 123, 'startPeriod': 123 }, 'systemControls': [ { 'namespace': 'string', 'value': 'string' }, ], 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ], 'firelensConfiguration': { 'type': 'fluentd'|'fluentbit', 'options': { 'string': 'string' } }, 'credentialSpecs': [ 'string', ] }, ], volumes=[ { 'name': 'string', 'host': { 'sourcePath': 'string' }, 'dockerVolumeConfiguration': { 'scope': 'task'|'shared', 'autoprovision': True|False, 'driver': 'string', 'driverOpts': { 'string': 'string' }, 'labels': { 'string': 'string' } }, 'efsVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'transitEncryption': 'ENABLED'|'DISABLED', 'transitEncryptionPort': 123, 'authorizationConfig': { 'accessPointId': 'string', 'iam': 'ENABLED'|'DISABLED' } }, 'fsxWindowsFileServerVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'authorizationConfig': { 'credentialsParameter': 'string', 'domain': 
'string' } }, 'configuredAtLaunch': True|False }, ], placementConstraints=[ { 'type': 'memberOf', 'expression': 'string' }, ], requiresCompatibilities=[ 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', ], cpu='string', memory='string', tags=[ { 'key': 'string', 'value': 'string' }, ], pidMode='host'|'task', ipcMode='host'|'task'|'none', proxyConfiguration={ 'type': 'APPMESH', 'containerName': 'string', 'properties': [ { 'name': 'string', 'value': 'string' }, ] }, inferenceAccelerators=[ { 'deviceName': 'string', 'deviceType': 'string' }, ], ephemeralStorage={ 'sizeInGiB': 123 }, runtimePlatform={ 'cpuArchitecture': 'X86_64'|'ARM64', 'operatingSystemFamily': 'WINDOWS_SERVER_2019_FULL'|'WINDOWS_SERVER_2019_CORE'|'WINDOWS_SERVER_2016_FULL'|'WINDOWS_SERVER_2004_CORE'|'WINDOWS_SERVER_2022_CORE'|'WINDOWS_SERVER_2022_FULL'|'WINDOWS_SERVER_2025_CORE'|'WINDOWS_SERVER_2025_FULL'|'WINDOWS_SERVER_20H2_CORE'|'LINUX' }, enableFaultInjection=True|False )
string
[REQUIRED]
You must specify a family for a task definition. You can use it to track multiple versions of the same task definition. The family is used as a name for your task definition. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
string
The short name or full Amazon Resource Name (ARN) of the IAM role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Roles for Tasks in the Amazon Elastic Container Service Developer Guide.
string
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
string
The Docker networking mode to use for the containers in the task. The valid values are none, bridge, awsvpc, and host. If no network mode is specified, the default is bridge.
For Amazon ECS tasks on Fargate, the awsvpc network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, <default> or awsvpc can be used. If the network mode is set to none, you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The host and awsvpc network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the bridge mode.
With the host and awsvpc network modes, exposed container ports are mapped directly to the corresponding host port (for the host network mode) or the attached elastic network interface port (for the awsvpc network mode), so you cannot take advantage of dynamic host port mappings.
If the network mode is awsvpc, the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
If the network mode is host, you cannot run multiple instantiations of the same task on a single container instance when port mappings are used.
list
[REQUIRED]
A list of container definitions in JSON format that describe the different containers that make up your task.
(dict) --
Container definitions are used in task definitions to describe the different containers that are launched as part of a task.
name (string) --
The name of a container. If you're linking multiple containers together in a task definition, the name of one container can be entered in the links of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name in the docker container create command and the --name option to docker run.
image (string) --
The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either repository-url/image:tag or repository-url/image@digest. For images using tags (repository-url/image:tag), up to 255 characters total are allowed, including letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs (#). For images using digests (repository-url/image@digest), the 255 character limit applies only to the repository URL and image name (everything before the @ sign). The only supported hash function is sha256, and the hash value after sha256: must be exactly 64 characters (only letters A-F, a-f, and numbers 0-9 are allowed). This parameter maps to Image in the docker container create command and the IMAGE parameter of docker run.
When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks.
Images in Amazon ECR repositories can be specified by either using the full registry/repository:tag or registry/repository@digest. For example, 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>:latest or 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE.
Images in official repositories on Docker Hub use a single name (for example, ubuntu or mongo).
Images in other repositories on Docker Hub are qualified with an organization name (for example, amazon/amazon-ecs-agent).
Images in other online repositories are qualified further by a domain name (for example, quay.io/assemblyline/ubuntu).
repositoryCredentials (dict) --
The private repository authentication credentials to use.
credentialsParameter (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
cpu (integer) --
The number of cpu units reserved for the container. This parameter maps to CpuShares in the docker container create command and the --cpu-shares option to docker run.
This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level cpu value.
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:
Agent versions less than or equal to 1.1.0: Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.
Agent versions greater than or equal to 1.2.0: Null, zero, and CPU values of 1 are passed to Docker as 2.
Agent versions greater than or equal to 1.84.0: CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.
On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as 0, which Windows interprets as 1% of one CPU.
memory (integer) --
The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task memory value, if one is specified. This parameter maps to Memory in the docker container create command and the --memory option to docker run.
If using the Fargate launch type, this parameter is optional.
If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. If you specify both a container-level memory and memoryReservation value, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
memoryReservation (integer) --
The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when it needs to, up to either the hard limit specified with the memory parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to MemoryReservation in the docker container create command and the --memory-reservation option to docker run.
If a task-level memory value is not specified, you must specify a non-zero integer for one or both of memory or memoryReservation in a container definition. If you specify both, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.
For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation of 128 MiB, and a memory hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
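As an illustration of the burst scenario above (a minimal sketch; the container name and image are hypothetical), the corresponding fragment of a container definition might look like:
container_definition = {
    'name': 'bursty-app',                                     # hypothetical container name
    'image': 'public.ecr.aws/docker/library/busybox:latest',
    'memoryReservation': 128,  # soft limit (MiB) the container normally needs
    'memory': 300,             # hard limit (MiB); must be greater than memoryReservation
    'essential': True,
}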
links (list) --
The links parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is bridge. The name:internalName construct is analogous to name:alias in Docker links. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to Links in the docker container create command and the --link option to docker run.
(string) --
portMappings (list) --
The list of port mappings for the container. Port mappings allow containers to access ports on the host container instance to send or receive traffic.
For task definitions that use the awsvpc network mode, only specify the containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
Port mappings on Windows use the NetNAT gateway address rather than localhost. There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself.
This parameter maps to PortBindings in the docker container create command and the --publish option to docker run. If the network mode of a task definition is set to none, then you can't specify port mappings. If the network mode of a task definition is set to host, then host ports must either be undefined or they must match the container port in the port mapping.
(dict) --
Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition.
If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
Most fields of this parameter (containerPort, hostPort, protocol) map to PortBindings in the docker container create command and the --publish option to docker run. If the network mode of a task definition is set to host, host ports must either be undefined or match the container port in the port mapping.
After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the networkBindings section of DescribeTasks API responses.
containerPort (integer) --
The port number on the container that's bound to the user-specified or automatically assigned host port.
If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort.
If you use containers in a task with the bridge network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see hostPort. Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance.
hostPort (integer) --
The port number on the container instance to reserve for your container.
If you specify a containerPortRange, leave this field empty and the value of the hostPort is set as follows:
For containers in a task with the awsvpc network mode, the hostPort is set to the same value as the containerPort. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports. This is a dynamic mapping strategy.
If you use containers in a task with the awsvpc or host network mode, the hostPort can either be left blank or set to the same value as the containerPort.
If you use containers in a task with the bridge network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the hostPort (or set it to 0) while specifying a containerPort and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version.
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under /proc/sys/net/ipv4/ip_local_port_range. If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range.
The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the remainingResources of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota.
protocol (string) --
The protocol used for the port mapping. Valid values are tcp and udp. The default is tcp. protocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.
name (string) --
The name that's used for the port mapping. This parameter is the name that you use in the serviceConnectConfiguration and the vpcLatticeConfigurations of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
appProtocol (string) --
The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy. If you set this parameter, Amazon ECS adds protocol-specific telemetry in the Amazon ECS console and CloudWatch.
If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP.
appProtocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
containerPortRange (string) --
The port number range on the container that's bound to the dynamically mapped host port range.
The following rules apply when you specify a containerPortRange:
You must use either the bridge network mode or the awsvpc network mode.
This parameter is available for both the EC2 and Fargate launch types.
This parameter is available for both the Linux and Windows operating systems.
The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the ecs-init package
You can specify a maximum of 100 port ranges per container.
You do not specify a hostPortRange. The value of the hostPortRange is set as follows:
For containers in a task with the awsvpc network mode, the hostPortRange is set to the same value as the containerPortRange. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind them to the container ports.
The containerPortRange valid values are between 1 and 65535.
A port can only be included in one port mapping per container.
You cannot specify overlapping port ranges.
The first port in the range must be less than the last port in the range.
Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the GitHub website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the Amazon ECS Developer Guide.
You can call DescribeTasks to view the hostPortRange, which are the host ports that are bound to the container ports.
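A minimal sketch of a port mapping that uses containerPortRange under the rules above (the mapping name and range are hypothetical):
port_mappings = [
    {
        'name': 'app-range',                # hypothetical port-mapping name
        'containerPortRange': '8000-8010',  # first port must be less than the last
        'protocol': 'tcp',
    },
]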
essential (boolean) --
If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential parameter of a container is marked as false, its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.
All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application Architecture in the Amazon Elastic Container Service Developer Guide.
restartPolicy (dict) --
The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) -- [REQUIRED]
Specifies whether a restart policy is enabled for the container.
ignoredExitCodes (list) --
A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes.
(integer) --
restartAttemptPeriod (integer) --
A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every restartAttemptPeriod seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum restartAttemptPeriod of 60 seconds and a maximum restartAttemptPeriod of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted.
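For example, a restart policy that ignores clean exits might look like this sketch (the ignored exit code is an illustrative choice):
restart_policy = {
    'enabled': True,
    'ignoredExitCodes': [0],      # don't restart containers that exit cleanly (illustrative)
    'restartAttemptPeriod': 300,  # seconds the container must run before a restart is attempted
}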
entryPoint (list) --
The entry point that's passed to the container. This parameter maps to Entrypoint in the docker container create command and the --entrypoint option to docker run.
(string) --
command (list) --
The command that's passed to the container. This parameter maps to Cmd in the docker container create command and the COMMAND parameter to docker run. If there are multiple arguments, each argument is a separated string in the array.
(string) --
environment (list) --
The environment variables to pass to a container. This parameter maps to Env in the docker container create command and the --env option to docker run.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
environmentFiles (list) --
A list of files containing the environment variables to pass to a container. This parameter maps to the --env-file option to docker run.
You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file contains an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying Environment Variables in the Amazon Elastic Container Service Developer Guide.
(dict) --
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the VARIABLE values.
value (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type (string) -- [REQUIRED]
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
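A minimal environmentFiles entry might look like this sketch (the S3 object ARN is hypothetical):
environment_files = [
    {
        'value': 'arn:aws:s3:::demo-config-bucket/app.env',  # hypothetical S3 object ARN
        'type': 's3',                                        # the only supported value
    },
]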
mountPoints (list) --
The mount points for data volumes in your container.
This parameter maps to Volumes in the docker container create command and the --volume option to docker run.
Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives.
(dict) --
The details for a volume mount point that's used in a container definition.
sourceVolume (string) --
The name of the volume to mount. Must be a volume name referenced in the name parameter of a task definition volume.
containerPath (string) --
The path on the container to mount the host volume at.
readOnly (boolean) --
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
volumesFrom (list) --
Data volumes to mount from another container. This parameter maps to VolumesFrom in the docker container create command and the --volumes-from option to docker run.
(dict) --
Details on a data volume from another container in the same task definition.
sourceContainer (string) --
The name of another container within the same task definition to mount volumes from.
readOnly (boolean) --
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
linuxParameters (dict) --
Linux-specific modifications that are applied to the default Docker container configuration, such as Linux kernel capabilities. For more information, see KernelCapabilities.
capabilities (dict) --
The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker.
add (list) --
The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string) --
drop (list) --
The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string) --
devices (list) --
Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run.
(dict) --
An object representing a container instance host device.
hostPath (string) -- [REQUIRED]
The path for the device on the host container instance.
containerPath (string) --
The path inside the container at which to expose the host device.
permissions (list) --
The explicit permissions to provide to the container for the device. By default, the container has permissions for read, write, and mknod for the device.
(string) --
initProcessEnabled (boolean) --
Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
sharedMemorySize (integer) --
The value for the size (in MiB) of the /dev/shm volume. This parameter maps to the --shm-size option to docker run.
tmpfs (list) --
The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run.
(dict) --
The container path, mount options, and size of the tmpfs mount.
containerPath (string) -- [REQUIRED]
The absolute file path where the tmpfs volume is to be mounted.
size (integer) -- [REQUIRED]
The maximum size (in MiB) of the tmpfs volume.
mountOptions (list) --
The list of tmpfs volume mount options.
Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
(string) --
maxSwap (integer) --
The total amount of swap memory (in MiB) a container can use. This parameter will be translated to the --memory-swap option to docker run where the value would be the sum of the container memory plus the maxSwap value.
If a maxSwap value of 0 is specified, the container will not use swap. Accepted values are 0 or any positive integer. If the maxSwap parameter is omitted, the container will use the swap configuration for the container instance it is running on. A maxSwap value must be set for the swappiness parameter to be used.
swappiness (integer) --
This allows you to tune a container's memory swappiness behavior. A swappiness value of 0 will cause swapping to not happen unless absolutely necessary. A swappiness value of 100 will cause pages to be swapped very aggressively. Accepted values are whole numbers between 0 and 100. If the swappiness parameter is not specified, a default value of 60 is used. If a value is not specified for maxSwap then this parameter is ignored. This parameter maps to the --memory-swappiness option to docker run.
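Putting the Linux-specific settings above together, a linuxParameters block might look like this sketch (the mount path and sizes are illustrative):
linux_parameters = {
    'initProcessEnabled': True,
    'sharedMemorySize': 64,               # /dev/shm size in MiB
    'tmpfs': [
        {
            'containerPath': '/scratch',  # hypothetical mount path
            'size': 128,                  # MiB
            'mountOptions': ['rw', 'noexec'],
        },
    ],
    'maxSwap': 512,                       # MiB; must be set for swappiness to apply
    'swappiness': 10,
}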
secrets (list) --
The secrets to pass to the container. For more information, see Specifying Sensitive Data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) -- [REQUIRED]
The name of the secret.
valueFrom (string) -- [REQUIRED]
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
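A secrets entry might look like this sketch (the environment variable name and the Secrets Manager ARN are hypothetical; an SSM Parameter Store parameter ARN works the same way):
secrets = [
    {
        'name': 'DB_PASSWORD',  # exposed to the container as an environment variable of this name
        'valueFrom': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:demo/db-password',  # hypothetical ARN
    },
]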
dependsOn (list) --
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition. When a dependency is defined for container startup, for container shutdown it is reversed.
For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
(dict) --
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For more information about how to create a container dependency, see Container dependency in the Amazon Elastic Container Service Developer Guide.
containerName (string) -- [REQUIRED]
The name of a container.
condition (string) -- [REQUIRED]
The dependency condition of the container. The following are the available conditions and their behavior:
START - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
COMPLETE - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container.
SUCCESS - This condition is the same as COMPLETE, but it also requires that the container exits with a zero status. This condition can't be set on an essential container.
HEALTHY - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup.
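For example, a dependsOn list using the conditions above might look like this sketch (the container names are hypothetical):
depends_on = [
    {'containerName': 'db-migrate', 'condition': 'SUCCESS'},  # wait for a one-shot migration container to exit 0
    {'containerName': 'cache', 'condition': 'HEALTHY'},       # wait for the cache container's health check to pass
]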
startTimeout (integer) --
Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a COMPLETE, SUCCESS, or HEALTHY status. If a startTimeout value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a STOPPED state.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks using the EC2 launch type, your container instances require at least version 1.26.0 of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
stopTimeout (integer) --
Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks that use the Fargate launch type, the max stop timeout value is 120 seconds and if the parameter is not specified, the default value of 30 seconds is used.
For tasks that use the EC2 launch type, if the stopTimeout parameter isn't specified, the value set for the Amazon ECS container agent configuration variable ECS_CONTAINER_STOP_TIMEOUT is used. If neither the stopTimeout parameter nor the ECS_CONTAINER_STOP_TIMEOUT agent configuration variable is set, then the default value of 30 seconds is used for both Linux and Windows containers. Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
versionConsistency (string) --
Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, the value is enabled. If you set the value for a container as disabled, Amazon ECS will not resolve the provided container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see Container image resolution in the Amazon ECS Developer Guide.
hostname (string) --
The hostname to use for your container. This parameter maps to Hostname in the docker container create command and the --hostname option to docker run.
user (string) --
The user to use inside the container. This parameter maps to User in the docker container create command and the --user option to docker run.
You can specify the user using the following formats. If specifying a UID or GID, you must specify it as a positive integer.
user
user:group
uid
uid:gid
user:gid
uid:group
workingDirectory (string) --
The working directory to run commands inside the container in. This parameter maps to WorkingDir in the docker container create command and the --workdir option to docker run.
disableNetworking (boolean) --
When this parameter is true, networking is off within the container. This parameter maps to NetworkDisabled in the docker container create command.
privileged (boolean) --
When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root user). This parameter maps to Privileged in the docker container create command and the --privileged option to docker run.
readonlyRootFilesystem (boolean) --
When this parameter is true, the container is given read-only access to its root file system. This parameter maps to ReadonlyRootfs in the docker container create command and the --read-only option to docker run.
dnsServers (list) --
A list of DNS servers that are presented to the container. This parameter maps to Dns in the docker container create command and the --dns option to docker run.
(string) --
dnsSearchDomains (list) --
A list of DNS search domains that are presented to the container. This parameter maps to DnsSearch in the docker container create command and the --dns-search option to docker run.
(string) --
extraHosts (list) --
A list of hostnames and IP address mappings to append to the /etc/hosts file on the container. This parameter maps to ExtraHosts in the docker container create command and the --add-host option to docker run.
(dict) --
Hostnames and IP address entries that are added to the /etc/hosts file of a container via the extraHosts parameter of its ContainerDefinition.
hostname (string) -- [REQUIRED]
The hostname to use in the /etc/hosts entry.
ipAddress (string) -- [REQUIRED]
The IP address to use in the /etc/hosts entry.
dockerSecurityOptions (list) --
A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks using the Fargate launch type.
For Linux tasks on EC2, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems.
For any tasks on EC2, this parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers in the Amazon Elastic Container Service Developer Guide.
This parameter maps to SecurityOpt in the docker container create command and the --security-opt option to docker run.
Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | "credentialspec:CredentialSpecFilePath"
(string) --
interactive (boolean) --
When this parameter is true, you can deploy containerized applications that require stdin or a tty to be allocated. This parameter maps to OpenStdin in the docker container create command and the --interactive option to docker run.
pseudoTerminal (boolean) --
When this parameter is true, a TTY is allocated. This parameter maps to Tty in the docker container create command and the --tty option to docker run.
dockerLabels (dict) --
A key/value map of labels to add to the container. This parameter maps to Labels in the docker container create command and the --label option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
ulimits (list) --
A list of ulimits to set in the container. If a ulimit value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to Ulimits in the docker container create command and the --ulimit option to docker run. Valid naming values are displayed in the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile resource limit parameter which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(dict) --
The ulimit settings to pass to the container.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile resource limit parameter which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
You can specify the ulimit settings for a container in a task definition.
name (string) -- [REQUIRED]
The type of the ulimit.
softLimit (integer) -- [REQUIRED]
The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
hardLimit (integer) -- [REQUIRED]
The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
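A ulimits entry that raises the open-file limit might look like this sketch (the values mirror the Fargate defaults mentioned above and are illustrative choices for EC2):
ulimits = [
    {
        'name': 'nofile',     # open-file limit
        'softLimit': 65535,
        'hardLimit': 65535,
    },
]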
logConfiguration (dict) --
The log configuration specification for the container.
This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options).
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
logDriver (string) -- [REQUIRED]
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs in order for them to appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might otherwise exhaust the memory available for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) -- [REQUIRED]
The name of the secret.
valueFrom (string) -- [REQUIRED]
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the Amazon Elastic Container Service Developer Guide.
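Pulling the options above together, here is a hedged sketch of a logConfiguration block that uses the awslogs driver in non-blocking mode; the log group, Region, prefix, and buffer size are placeholder assumptions.
# Illustrative logConfiguration for the awslogs driver; all names are placeholders.
log_configuration = {
    'logDriver': 'awslogs',
    'options': {
        'awslogs-group': '/ecs/my-app',         # assumed log group
        'awslogs-region': 'us-east-1',          # assumed Region
        'awslogs-stream-prefix': 'my-service',  # assumed stream prefix
        'awslogs-create-group': 'true',
        'mode': 'non-blocking',
        'max-buffer-size': '25m',               # assumed buffer size
    },
}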
healthCheck (dict) --
The container health check command and associated configuration parameters for the container. This parameter maps to HealthCheck in the docker container create command and the HEALTHCHECK parameter of docker run.
command (list) -- [REQUIRED]
A string array representing the command that the container runs to determine if it is healthy. The string array must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container's default shell.
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets.
[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]
You don't include the double quotes and brackets when you use the Amazon Web Services Management Console.
CMD-SHELL, curl -f http://localhost/ || exit 1
An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the docker container create command.
(string) --
interval (integer) --
The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a command.
timeout (integer) --
The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a command.
retries (integer) --
The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a command.
startPeriod (integer) --
The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the startPeriod is off. This value applies only when you specify a command.
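A minimal sketch of the health check parameters described above, assuming the container serves HTTP on localhost; the command and timing values are illustrative, not prescriptive.
# Illustrative container health check that curls the container's local endpoint.
health_check = {
    'command': ['CMD-SHELL', 'curl -f http://localhost/ || exit 1'],
    'interval': 30,     # seconds between health check executions
    'timeout': 5,       # seconds to wait before the check is considered failed
    'retries': 3,       # failed checks before the container is marked unhealthy
    'startPeriod': 60,  # grace period in seconds before failures count toward retries
}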
systemControls (list) --
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure the net.ipv4.tcp_keepalive_time setting to maintain longer lived connections.
(dict) --
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure the net.ipv4.tcp_keepalive_time setting to maintain longer lived connections.
We don't recommend that you specify network-related systemControls parameters for multiple containers in a single task that also uses either the awsvpc or host network mode. Doing this has the following disadvantages:
For tasks that use the awsvpc network mode including Fargate, if you set systemControls for any container, it applies to all containers in the task. If you set different systemControls for multiple containers in a single task, the container that's started last determines which systemControls take effect.
For tasks that use the host network mode, the network namespace systemControls aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode.
For tasks that use the host IPC mode, IPC namespace systemControls aren't supported.
For tasks that use the task IPC mode, IPC namespace systemControls values apply to all containers within a task.
namespace (string) --
The namespaced kernel parameter to set a value for.
value (string) --
The value for the namespaced kernel parameter that's specified in namespace.
Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*"
Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted.
All of these values are supported by Fargate.
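As a sketch under the IPC and network caveats above, a systemControls entry might look like the following; the keepalive value is an assumption.
# Illustrative systemControls list tuning a namespaced kernel parameter.
system_controls = [
    {
        'namespace': 'net.ipv4.tcp_keepalive_time',
        'value': '300',  # hypothetical keepalive time, in seconds
    },
]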
resourceRequirements (list) --
The type and amount of a resource to assign to a container. The only supported resource is a GPU.
(dict) --
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
value (string) -- [REQUIRED]
The value for the specified resource type.
When the type is GPU, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the deviceName for an InferenceAccelerator specified in a task definition.
type (string) -- [REQUIRED]
The type of resource to assign to a container.
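A minimal sketch of a GPU resource requirement as described above; reserving one GPU is an arbitrary choice for illustration.
# Illustrative resourceRequirements reserving one physical GPU for the container.
resource_requirements = [
    {
        'type': 'GPU',
        'value': '1',  # number of physical GPUs reserved for this container
    },
]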
firelensConfiguration (dict) --
The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the Amazon Elastic Container Service Developer Guide.
type (string) -- [REQUIRED]
The log router to use. The valid values are fluentd or fluentbit.
options (dict) --
The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is "options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide.
(string) --
(string) --
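A hedged sketch of a FireLens configuration that pulls a custom Fluent Bit configuration file from Amazon S3; the bucket and object key are hypothetical.
# Illustrative firelensConfiguration; the S3 ARN is a placeholder.
firelens_configuration = {
    'type': 'fluentbit',
    'options': {
        'enable-ecs-log-metadata': 'true',
        'config-file-type': 's3',
        'config-file-value': 'arn:aws:s3:::my-bucket/fluent.conf',  # hypothetical bucket/object
    },
}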
credentialSpecs (list) --
A list of ARNs in SSM or Amazon S3 to a credential spec ( CredSpec) file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the dockerSecurityOptions. The maximum number of ARNs is 1.
There are two formats for each ARN.
credentialspecdomainless:MyARN
You use credentialspecdomainless:MyARN to provide a CredSpec with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret.
Each task that runs on any container instance can join different domains.
You can use this format without joining the container instance to a domain.
credentialspec:MyARN
You use credentialspec:MyARN to provide a CredSpec for a single domain.
You must join the container instance to the domain before you start any tasks that use this task definition.
In both formats, replace MyARN with the ARN in SSM or Amazon S3.
If you provide a credentialspecdomainless:MyARN, the credspec must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers.
(string) --
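For illustration, a credentialSpecs list using the domainless format might look like the following; the SSM parameter ARN is a placeholder.
# Illustrative credentialSpecs value (domainless format); the ARN is hypothetical.
credential_specs = [
    'credentialspecdomainless:arn:aws:ssm:us-east-1:111122223333:parameter/example-credspec',
]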
volumes (list) --
A list of volume definitions in JSON format that containers in your task might use.
(dict) --
The data volume configuration for tasks launched using this task definition. Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes but only one volume configured at launch is supported. Each volume defined in the volume configuration may only specify a name and one of either configuredAtLaunch, dockerVolumeConfiguration, efsVolumeConfiguration, fsxWindowsFileServerVolumeConfiguration, or host. If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks.
name (string) --
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task.
For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition.
When a volume is using the efsVolumeConfiguration, the name is required.
host (dict) --
This parameter is specified when you use bind mount host volumes. The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running.
Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my\path:C:\my\path or D:\:C:\my\path.
sourcePath (string) --
When the host parameter is used, specify a sourcePath to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
If you're using the Fargate launch type, the sourcePath parameter is not supported.
dockerVolumeConfiguration (dict) --
This parameter is specified when you use Docker volumes.
Windows containers only support the use of the local driver. To use bind mounts, specify the host parameter instead.
scope (string) --
The scope for the Docker volume that determines its lifecycle. Docker volumes that are scoped to a task are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as shared persist after the task stops.
autoprovision (boolean) --
If this value is true, the Docker volume is created if it doesn't already exist.
driver (string) --
The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use docker plugin ls to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. This parameter maps to Driver in the docker container create command and the --driver option to docker volume create.
driverOpts (dict) --
A map of Docker driver-specific options passed through. This parameter maps to DriverOpts in the docker create-volume command and the --opt option to docker volume create.
(string) --
(string) --
labels (dict) --
Custom metadata to add to your Docker volume. This parameter maps to Labels in the docker container create command and the --label option to docker volume create.
(string) --
(string) --
efsVolumeConfiguration (dict) --
This parameter is specified when you use an Amazon Elastic File System file system for task storage.
fileSystemId (string) -- [REQUIRED]
The Amazon EFS file system ID to use.
rootDirectory (string) --
The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying / will have the same effect as omitting this parameter.
transitEncryption (string) --
Determines whether to use encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be turned on if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of DISABLED is used. For more information, see Encrypting data in transit in the Amazon Elastic File System User Guide.
transitEncryptionPort (integer) --
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS mount helper in the Amazon Elastic File System User Guide.
authorizationConfig (dict) --
The authorization configuration details for the Amazon EFS file system.
accessPointId (string) --
The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the EFSVolumeConfiguration must either be omitted or set to / which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be on in the EFSVolumeConfiguration. For more information, see Working with Amazon EFS access points in the Amazon Elastic File System User Guide.
iam (string) --
Determines whether to use the Amazon ECS task role defined in a task definition when mounting the Amazon EFS file system. If it is turned on, transit encryption must be turned on in the EFSVolumeConfiguration. If this parameter is omitted, the default value of DISABLED is used. For more information, see Using Amazon EFS access points in the Amazon Elastic Container Service Developer Guide.
fsxWindowsFileServerVolumeConfiguration (dict) --
This parameter is specified when you use an Amazon FSx for Windows File Server file system for task storage.
fileSystemId (string) -- [REQUIRED]
The Amazon FSx for Windows File Server file system ID to use.
rootDirectory (string) -- [REQUIRED]
The directory within the Amazon FSx for Windows File Server file system to mount as the root directory inside the host.
authorizationConfig (dict) -- [REQUIRED]
The authorization configuration details for the Amazon FSx for Windows File Server file system.
credentialsParameter (string) -- [REQUIRED]
The authorization credential option to use. The authorization credential options can be provided using either the Amazon Resource Name (ARN) of a Secrets Manager secret or an SSM Parameter Store parameter. The ARN refers to the stored credentials.
domain (string) -- [REQUIRED]
A fully qualified domain name hosted by a Directory Service Managed Microsoft AD (Active Directory) or self-hosted AD on Amazon EC2.
configuredAtLaunch (boolean) --
Indicates whether the volume should be configured at launch time. This is used to create Amazon EBS volumes for standalone tasks or tasks created as part of a service. Each task definition revision may only have one volume configured at launch in the volume configuration.
To configure a volume at launch time, use this task definition revision and specify a volumeConfigurations object when calling the CreateService, UpdateService, RunTask or StartTask APIs.
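As a sketch only, the following fragment defines a single encrypted Amazon EFS volume and the mountPoints entry a container would use to reference it; the volume name, file system ID, and access point ID are placeholders.
# Illustrative volumes list using an encrypted Amazon EFS volume.
volumes = [
    {
        'name': 'shared-data',  # referenced by sourceVolume in mountPoints
        'efsVolumeConfiguration': {
            'fileSystemId': 'fs-0123456789abcdef0',  # hypothetical file system ID
            'rootDirectory': '/',
            'transitEncryption': 'ENABLED',
            'authorizationConfig': {
                'accessPointId': 'fsap-0123456789abcdef0',  # hypothetical access point
                'iam': 'ENABLED',
            },
        },
    },
]
# The container definition mounts the volume by name.
mount_points = [
    {'sourceVolume': 'shared-data', 'containerPath': '/mnt/shared', 'readOnly': False},
]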
placementConstraints (list) --
An array of placement constraint objects to use for the task. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
(dict) --
The constraint on task placement in the task definition. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. The MemberOf constraint restricts selection to be from a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
requiresCompatibilities (list) --
The task launch type that Amazon ECS validates the task definition against. A client exception is returned if the task definition doesn't validate against the compatibilities specified. If no value is specified, the parameter is omitted from the response.
(string) --
cpu (string) --
The number of CPU units used by the task. It can be expressed as an integer using CPU units (for example, 1024) or as a string using vCPUs (for example, 1 vCPU or 1 vcpu) in a task definition. String values are converted to an integer indicating the CPU units when the task definition is registered.
If you're using the EC2 launch type or external launch type, this field is optional. Supported values are between 128 CPU units (0.125 vCPUs) and 196608 CPU units (192 vCPUs). If you do not specify a value, the parameter is ignored.
This field is required for Fargate. For information about the valid values, see Task size in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The amount of memory (in MiB) used by the task. It can be expressed as an integer using MiB (for example, 1024) or as a string using GB (for example, 1GB or 1 GB) in a task definition. String values are converted to an integer indicating the MiB when the task definition is registered.
If using the EC2 launch type, this field is optional.
If using the Fargate launch type, this field is required and you must use one of the following values. This determines your range of supported values for the cpu parameter.
The CPU units cannot be less than 1 vCPU when you use Windows containers on Fargate.
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu values: 256 (.25 vCPU)
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available cpu values: 512 (.5 vCPU)
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available cpu values: 1024 (1 vCPU)
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available cpu values: 2048 (2 vCPU)
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available cpu values: 4096 (4 vCPU)
Between 16 GB and 60 GB in 4 GB increments - Available cpu values: 8192 (8 vCPU). This option requires Linux platform 1.4.0 or later.
Between 32 GB and 120 GB in 8 GB increments - Available cpu values: 16384 (16 vCPU). This option requires Linux platform 1.4.0 or later.
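As a minimal sketch of the pairings listed above, a Fargate task that requests 1 vCPU could pair it with 3 GB of memory; both values are passed as strings at the task level, and the specific pairing shown is just one of the allowed combinations.
# Illustrative Fargate task size: 1 vCPU paired with 3 GB of memory.
task_size = {
    'cpu': '1024',     # 1 vCPU
    'memory': '3072',  # 3 GB, one of the memory values allowed for 1024 CPU units
}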
tags (list) --
The metadata that you apply to the task definition to help you categorize and organize it. Each tag consists of a key and an optional value. You define both of them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
pidMode (string) --
The process namespace to use for the containers in the task. The valid values are host or task. On Fargate for Linux containers, the only valid value is task. For example, monitoring sidecars might need pidMode to access information about other containers running in the same task.
If host is specified, all containers within the tasks that specified the host PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.
If task is specified, all containers within the specified task share the same process namespace.
If no value is specified, the default is a private namespace for each container.
If the host PID mode is used, there's a heightened risk of undesired process namespace exposure.
ipcMode (string) --
The IPC resource namespace to use for the containers in the task. The valid values are host, task, or none. If host is specified, then all containers within the tasks that specified the host IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If task is specified, all containers within the specified task share the same IPC resources. If none is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance.
If the host IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure.
If you are setting namespaced kernel parameters using systemControls for the containers in the task, the following will apply to your IPC resource namespace. For more information, see System Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host IPC mode, IPC namespace related systemControls are not supported.
For tasks that use the task IPC mode, IPC namespace related systemControls will apply to all containers within a task.
proxyConfiguration (dict) --
The configuration details for the App Mesh proxy.
For tasks hosted on Amazon EC2 instances, the container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the ecs-init package to use a proxy configuration. If your container instances are launched from the Amazon ECS-optimized AMI version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized AMI versions in the Amazon Elastic Container Service Developer Guide.
type (string) --
The proxy type. The only supported value is APPMESH.
containerName (string) -- [REQUIRED]
The name of the container that will serve as the App Mesh proxy.
properties (list) --
The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs.
IgnoredUID - (Required) The user ID (UID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredGID is specified, this field can be empty.
IgnoredGID - (Required) The group ID (GID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredUID is specified, this field can be empty.
AppPorts - (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the ProxyIngressPort and ProxyEgressPort.
ProxyIngressPort - (Required) Specifies the port that incoming traffic to the AppPorts is directed to.
ProxyEgressPort - (Required) Specifies the port that outgoing traffic from the AppPorts is directed to.
EgressIgnoredPorts - (Required) The egress traffic going to the specified ports is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
EgressIgnoredIPs - (Required) The egress traffic going to the specified IP addresses is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
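A hedged sketch of an App Mesh proxy configuration with the required properties listed above; the container name, UID, and port numbers are assumptions for illustration.
# Illustrative App Mesh proxyConfiguration; names, UID, and ports are placeholders.
proxy_configuration = {
    'type': 'APPMESH',
    'containerName': 'envoy',  # hypothetical proxy container name
    'properties': [
        {'name': 'IgnoredUID', 'value': '1337'},
        {'name': 'AppPorts', 'value': '8080'},
        {'name': 'ProxyIngressPort', 'value': '15000'},
        {'name': 'ProxyEgressPort', 'value': '15001'},
        {'name': 'EgressIgnoredPorts', 'value': ''},
        {'name': 'EgressIgnoredIPs', 'value': '169.254.170.2,169.254.169.254'},
    ],
}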
inferenceAccelerators (list) --
The Elastic Inference accelerators to use for the containers in the task.
(dict) --
Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) -- [REQUIRED]
The Elastic Inference accelerator device name. The deviceName must also be referenced in a container definition as a ResourceRequirement.
deviceType (string) -- [REQUIRED]
The Elastic Inference accelerator type to use.
ephemeralStorage (dict) --
The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on Fargate. For more information, see Using data volumes in tasks in the Amazon ECS Developer Guide.
sizeInGiB (integer) -- [REQUIRED]
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
runtimePlatform (dict) --
The operating system that your task definitions run on. A platform family is specified only for tasks using the Fargate launch type.
cpuArchitecture (string) --
The CPU architecture.
You can run your Linux tasks on an ARM-based platform by setting the value to ARM64. This option is available for tasks that run on Linux Amazon EC2 instances or Linux containers on Fargate.
operatingSystemFamily (string) --
The operating system.
enableFaultInjection (boolean) --
Enables fault injection when you register your task definition and allows for fault injection requests to be accepted from the task's containers. The default value is false.
Return type: dict
Response Syntax
{ 'taskDefinition': { 'taskDefinitionArn': 'string', 'containerDefinitions': [ { 'name': 'string', 'image': 'string', 'repositoryCredentials': { 'credentialsParameter': 'string' }, 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'links': [ 'string', ], 'portMappings': [ { 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'name': 'string', 'appProtocol': 'http'|'http2'|'grpc', 'containerPortRange': 'string' }, ], 'essential': True|False, 'restartPolicy': { 'enabled': True|False, 'ignoredExitCodes': [ 123, ], 'restartAttemptPeriod': 123 }, 'entryPoint': [ 'string', ], 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'mountPoints': [ { 'sourceVolume': 'string', 'containerPath': 'string', 'readOnly': True|False }, ], 'volumesFrom': [ { 'sourceContainer': 'string', 'readOnly': True|False }, ], 'linuxParameters': { 'capabilities': { 'add': [ 'string', ], 'drop': [ 'string', ] }, 'devices': [ { 'hostPath': 'string', 'containerPath': 'string', 'permissions': [ 'read'|'write'|'mknod', ] }, ], 'initProcessEnabled': True|False, 'sharedMemorySize': 123, 'tmpfs': [ { 'containerPath': 'string', 'size': 123, 'mountOptions': [ 'string', ] }, ], 'maxSwap': 123, 'swappiness': 123 }, 'secrets': [ { 'name': 'string', 'valueFrom': 'string' }, ], 'dependsOn': [ { 'containerName': 'string', 'condition': 'START'|'COMPLETE'|'SUCCESS'|'HEALTHY' }, ], 'startTimeout': 123, 'stopTimeout': 123, 'versionConsistency': 'enabled'|'disabled', 'hostname': 'string', 'user': 'string', 'workingDirectory': 'string', 'disableNetworking': True|False, 'privileged': True|False, 'readonlyRootFilesystem': True|False, 'dnsServers': [ 'string', ], 'dnsSearchDomains': [ 'string', ], 'extraHosts': [ { 'hostname': 'string', 'ipAddress': 'string' }, ], 'dockerSecurityOptions': [ 'string', ], 'interactive': True|False, 'pseudoTerminal': True|False, 'dockerLabels': { 'string': 'string' }, 'ulimits': [ { 'name': 'core'|'cpu'|'data'|'fsize'|'locks'|'memlock'|'msgqueue'|'nice'|'nofile'|'nproc'|'rss'|'rtprio'|'rttime'|'sigpending'|'stack', 'softLimit': 123, 'hardLimit': 123 }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] }, 'healthCheck': { 'command': [ 'string', ], 'interval': 123, 'timeout': 123, 'retries': 123, 'startPeriod': 123 }, 'systemControls': [ { 'namespace': 'string', 'value': 'string' }, ], 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ], 'firelensConfiguration': { 'type': 'fluentd'|'fluentbit', 'options': { 'string': 'string' } }, 'credentialSpecs': [ 'string', ] }, ], 'family': 'string', 'taskRoleArn': 'string', 'executionRoleArn': 'string', 'networkMode': 'bridge'|'host'|'awsvpc'|'none', 'revision': 123, 'volumes': [ { 'name': 'string', 'host': { 'sourcePath': 'string' }, 'dockerVolumeConfiguration': { 'scope': 'task'|'shared', 'autoprovision': True|False, 'driver': 'string', 'driverOpts': { 'string': 'string' }, 'labels': { 'string': 'string' } }, 'efsVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 'transitEncryption': 'ENABLED'|'DISABLED', 'transitEncryptionPort': 123, 'authorizationConfig': { 'accessPointId': 'string', 'iam': 'ENABLED'|'DISABLED' } }, 'fsxWindowsFileServerVolumeConfiguration': { 'fileSystemId': 'string', 'rootDirectory': 'string', 
'authorizationConfig': { 'credentialsParameter': 'string', 'domain': 'string' } }, 'configuredAtLaunch': True|False }, ], 'status': 'ACTIVE'|'INACTIVE'|'DELETE_IN_PROGRESS', 'requiresAttributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'placementConstraints': [ { 'type': 'memberOf', 'expression': 'string' }, ], 'compatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', ], 'runtimePlatform': { 'cpuArchitecture': 'X86_64'|'ARM64', 'operatingSystemFamily': 'WINDOWS_SERVER_2019_FULL'|'WINDOWS_SERVER_2019_CORE'|'WINDOWS_SERVER_2016_FULL'|'WINDOWS_SERVER_2004_CORE'|'WINDOWS_SERVER_2022_CORE'|'WINDOWS_SERVER_2022_FULL'|'WINDOWS_SERVER_2025_CORE'|'WINDOWS_SERVER_2025_FULL'|'WINDOWS_SERVER_20H2_CORE'|'LINUX' }, 'requiresCompatibilities': [ 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', ], 'cpu': 'string', 'memory': 'string', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'pidMode': 'host'|'task', 'ipcMode': 'host'|'task'|'none', 'proxyConfiguration': { 'type': 'APPMESH', 'containerName': 'string', 'properties': [ { 'name': 'string', 'value': 'string' }, ] }, 'registeredAt': datetime(2015, 1, 1), 'deregisteredAt': datetime(2015, 1, 1), 'registeredBy': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 }, 'enableFaultInjection': True|False }, 'tags': [ { 'key': 'string', 'value': 'string' }, ] }
Response Structure
(dict) --
taskDefinition (dict) --
The full description of the registered task definition.
taskDefinitionArn (string) --
The full Amazon Resource Name (ARN) of the task definition.
containerDefinitions (list) --
A list of container definitions in JSON format that describe the different containers that make up your task. For more information about container definition parameters and defaults, see Amazon ECS Task Definitions in the Amazon Elastic Container Service Developer Guide.
(dict) --
Container definitions are used in task definitions to describe the different containers that are launched as part of a task.
name (string) --
The name of a container. If you're linking multiple containers together in a task definition, the name of one container can be entered in the links of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to name in the docker container create command and the --name option to docker run.
image (string) --
The image used to start a container. This string is passed directly to the Docker daemon. By default, images in the Docker Hub registry are available. Other repositories are specified with either repository-url/image:tag or repository-url/image@digest. For images using tags (repository-url/image:tag), up to 255 characters total are allowed, including letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs (#). For images using digests (repository-url/image@digest), the 255 character limit applies only to the repository URL and image name (everything before the @ sign). The only supported hash function is sha256, and the hash value after sha256: must be exactly 64 characters (only letters A-F, a-f, and numbers 0-9 are allowed). This parameter maps to Image in the docker container create command and the IMAGE parameter of docker run.
When a new task starts, the Amazon ECS container agent pulls the latest version of the specified image and tag for the container to use. However, subsequent updates to a repository image aren't propagated to already running tasks.
Images in Amazon ECR repositories can be specified by either using the full registry/repository:tag or registry/repository@digest. For example, 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>:latest or 012345678910.dkr.ecr.<region-name>.amazonaws.com/<repository-name>@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE.
Images in official repositories on Docker Hub use a single name (for example, ubuntu or mongo).
Images in other repositories on Docker Hub are qualified with an organization name (for example, amazon/amazon-ecs-agent).
Images in other online repositories are qualified further by a domain name (for example, quay.io/assemblyline/ubuntu).
repositoryCredentials (dict) --
The private repository authentication credentials to use.
credentialsParameter (string) --
The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
cpu (integer) --
The number of cpu units reserved for the container. This parameter maps to CpuShares in the docker container create command and the --cpu-shares option to docker run.
This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level cpu value.
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:
Agent versions less than or equal to 1.1.0: Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.
Agent versions greater than or equal to 1.2.0: Null, zero, and CPU values of 1 are passed to Docker as 2.
Agent versions greater than or equal to 1.84.0: CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.
On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as 0, which Windows interprets as 1% of one CPU.
memory (integer) --
The amount (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. The total amount of memory reserved for all containers within a task must be lower than the task memory value, if one is specified. This parameter maps to Memory in the docker container create command and the --memory option to docker run.
If using the Fargate launch type, this parameter is optional.
If using the EC2 launch type, you must specify either a task-level memory value or a container-level memory value. If you specify both a container-level memory and memoryReservation value, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
memoryReservation (integer) --
The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit. However, your container can consume more memory when it needs to, up to either the hard limit specified with the memory parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to MemoryReservation in the docker container create command and the --memory-reservation option to docker run.
If a task-level memory value is not specified, you must specify a non-zero integer for one or both of memory or memoryReservation in a container definition. If you specify both, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance where the container is placed. Otherwise, the value of memory is used.
For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation of 128 MiB, and a memory hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed.
The Docker 20.10.0 or later daemon reserves a minimum of 6 MiB of memory for a container. So, don't specify less than 6 MiB of memory for your containers.
The Docker 19.03.13-ce or earlier daemon reserves a minimum of 4 MiB of memory for a container. So, don't specify less than 4 MiB of memory for your containers.
links (list) --
The links parameter allows containers to communicate with each other without the need for port mappings. This parameter is only supported if the network mode of a task definition is bridge. The name:internalName construct is analogous to name:alias in Docker links. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. This parameter maps to Links in the docker container create command and the --link option to docker run.
(string) --
portMappings (list) --
The list of port mappings for the container. Port mappings allow containers to access ports on the host container instance to send or receive traffic.
For task definitions that use the awsvpc network mode, only specify the containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
Port mappings on Windows use the NetNAT gateway address rather than localhost. There's no loopback for port mappings on Windows, so you can't access a container's mapped port from the host itself.
This parameter maps to PortBindings in the docker container create command and the --publish option to docker run. If the network mode of a task definition is set to none, then you can't specify port mappings. If the network mode of a task definition is set to host, then host ports must either be undefined or they must match the container port in the port mapping.
(dict) --
Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition.
If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort. The hostPort can be left blank or it must be the same value as the containerPort.
Most fields of this parameter (containerPort, hostPort, protocol) map to PortBindings in the docker container create command and the --publish option to docker run. If the network mode of a task definition is set to host, host ports must either be undefined or match the container port in the port mapping.
After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the networkBindings section of DescribeTasks API responses.
containerPort (integer) --
The port number on the container that's bound to the user-specified or automatically assigned host port.
If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort.
If you use containers in a task with the bridge network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see hostPort. Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance.
hostPort (integer) --
The port number on the container instance to reserve for your container.
If you specify a containerPortRange, leave this field empty and the value of the hostPort is set as follows:
For containers in a task with the awsvpc network mode, the hostPort is set to the same value as the containerPort. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports. This is a dynamic mapping strategy.
If you use containers in a task with the awsvpc or host network mode, the hostPort can either be left blank or set to the same value as the containerPort.
If you use containers in a task with the bridge network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the hostPort (or set it to 0) while specifying a containerPort and your container automatically receives a port in the ephemeral port range for your container instance operating system and Docker version.
The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under /proc/sys/net/ipv4/ip_local_port_range. If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range.
The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the remainingResources of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota.
protocol (string) --
The protocol used for the port mapping. Valid values are tcp and udp. The default is tcp. protocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.
name (string) --
The name that's used for the port mapping. This parameter is the name that you use in the serviceConnectConfiguration and the vpcLatticeConfigurations of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
appProtocol (string) --
The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy. If you set this parameter, Amazon ECS adds protocol-specific telemetry in the Amazon ECS console and CloudWatch.
If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP.
appProtocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
containerPortRange (string) --
The port number range on the container that's bound to the dynamically mapped host port range.
The following rules apply when you specify a containerPortRange:
You must use either the bridge network mode or the awsvpc network mode.
This parameter is available for both the EC2 and Fargate launch types.
This parameter is available for both the Linux and Windows operating systems.
The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the ecs-init package.
You can specify a maximum of 100 port ranges per container.
You do not specify a hostPortRange. The value of the hostPortRange is set as follows:
For containers in a task with the awsvpc network mode, the hostPortRange is set to the same value as the containerPortRange. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to docker to bind them to the container ports.
The containerPortRange valid values are between 1 and 65535.
A port can only be included in one port mapping per container.
You cannot specify overlapping port ranges.
The first port in the range must be less than the last port in the range.
Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the Github website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the Amazon ECS Developer Guide.
You can call DescribeTasks to view the hostPortRange which are the host ports that are bound to the container ports.
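As an illustrative sketch of the structure described above, a portMappings list might combine a named single port (for Service Connect) with a dynamically mapped port range; the name and port numbers are placeholders.
# Illustrative portMappings: a named port plus a container port range.
port_mappings = [
    {
        'containerPort': 8080,
        'protocol': 'tcp',
        'name': 'web',          # name referenced by serviceConnectConfiguration
        'appProtocol': 'http',
    },
    {
        'containerPortRange': '4000-4004',  # bound to a dynamic host port range
        'protocol': 'tcp',
    },
]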
essential (boolean) --
If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential parameter of a container is marked as false, its failure doesn't affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.
All tasks must have at least one essential container. If you have an application that's composed of multiple containers, group containers that are used for a common purpose into components, and separate the different components into multiple task definitions. For more information, see Application Architecture in the Amazon Elastic Container Service Developer Guide.
restartPolicy (dict) --
The restart policy for a container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task. For more information, see Restart individual containers in Amazon ECS tasks with container restart policies in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether a restart policy is enabled for the container.
ignoredExitCodes (list) --
A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes.
(integer) --
restartAttemptPeriod (integer) --
A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every restartAttemptPeriod seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum restartAttemptPeriod of 60 seconds and a maximum restartAttemptPeriod of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted.
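A minimal sketch of a restartPolicy with illustrative values:
# Illustrative restart policy for a single container definition.
restart_policy = {
    "enabled": True,
    "ignoredExitCodes": [0],       # treat a clean exit as final; don't restart
    "restartAttemptPeriod": 300,   # container must run 300 seconds before a restart is attempted
}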
entryPoint (list) --
The entry point that's passed to the container. This parameter maps to Entrypoint in the docker container create command and the --entrypoint option to docker run.
(string) --
command (list) --
The command that's passed to the container. This parameter maps to Cmd in the docker container create command and the COMMAND parameter to docker run. If there are multiple arguments, each argument is a separated string in the array.
(string) --
environment (list) --
The environment variables to pass to a container. This parameter maps to Env in the docker container create command and the --env option to docker run.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
environmentFiles (list) --
A list of files containing the environment variables to pass to a container. This parameter maps to the --env-file option to docker run.
You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file contains an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying Environment Variables in the Amazon Elastic Container Service Developer Guide.
(dict) --
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the VARIABLE values.
value (string) --
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type (string) --
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
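For illustration, the environment and environmentFiles parameters might be combined as in the following sketch; the bucket and object names are placeholders:
# Values set in "environment" take precedence over variables from the file.
environment = [{"name": "APP_ENV", "value": "production"}]
environment_files = [
    {"value": "arn:aws:s3:::example-bucket/app.env", "type": "s3"}  # must be an S3 object with a .env extension
]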
mountPoints (list) --
The mount points for data volumes in your container.
This parameter maps to Volumes in the docker container create command and the --volume option to docker run.
Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't span drives.
(dict) --
The details for a volume mount point that's used in a container definition.
sourceVolume (string) --
The name of the volume to mount. This must be a volume name referenced in the name parameter of a task definition volume.
containerPath (string) --
The path on the container to mount the host volume at.
readOnly (boolean) --
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
volumesFrom (list) --
Data volumes to mount from another container. This parameter maps to VolumesFrom in the docker container create command and the --volumes-from option to docker run.
(dict) --
Details on a data volume from another container in the same task definition.
sourceContainer (string) --
The name of another container within the same task definition to mount volumes from.
readOnly (boolean) --
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
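A sketch of mountPoints and volumesFrom, assuming a task-level volume named shared-data and a container named sidecar (both hypothetical):
mount_points = [
    {"sourceVolume": "shared-data", "containerPath": "/var/data", "readOnly": False}
]
volumes_from = [
    {"sourceContainer": "sidecar", "readOnly": True}  # mounts the sidecar container's volumes read-only
]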
linuxParameters (dict) --
Linux-specific modifications that are applied to the default Docker container configuration, such as Linux kernel capabilities. For more information see KernelCapabilities.
capabilities (dict) --
The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker.
add (list) --
The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string) --
drop (list) --
The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string) --
devices (list) --
Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run.
(dict) --
An object representing a container instance host device.
hostPath (string) --
The path for the device on the host container instance.
containerPath (string) --
The path inside the container at which to expose the host device.
permissions (list) --
The explicit permissions to provide to the container for the device. By default, the container has permissions for read, write, and mknod for the device.
(string) --
initProcessEnabled (boolean) --
Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
sharedMemorySize (integer) --
The value for the size (in MiB) of the /dev/shm volume. This parameter maps to the --shm-size option to docker run.
tmpfs (list) --
The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run.
(dict) --
The container path, mount options, and size of the tmpfs mount.
containerPath (string) --
The absolute file path where the tmpfs volume is to be mounted.
size (integer) --
The maximum size (in MiB) of the tmpfs volume.
mountOptions (list) --
The list of tmpfs volume mount options.
Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
(string) --
maxSwap (integer) --
The total amount of swap memory (in MiB) a container can use. This parameter will be translated to the --memory-swap option to docker run where the value would be the sum of the container memory plus the maxSwap value.
If a maxSwap value of 0 is specified, the container will not use swap. Accepted values are 0 or any positive integer. If the maxSwap parameter is omitted, the container will use the swap configuration for the container instance it is running on. A maxSwap value must be set for the swappiness parameter to be used.
swappiness (integer) --
This allows you to tune a container's memory swappiness behavior. A swappiness value of 0 will cause swapping to not happen unless absolutely necessary. A swappiness value of 100 will cause pages to be swapped very aggressively. Accepted values are whole numbers between 0 and 100. If the swappiness parameter is not specified, a default value of 60 is used. If a value is not specified for maxSwap then this parameter is ignored. This parameter maps to the --memory-swappiness option to docker run.
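Putting the Linux-specific settings together, an illustrative linuxParameters block might look like the following sketch (all values are examples, not recommendations):
linux_parameters = {
    "capabilities": {"add": ["NET_ADMIN"], "drop": ["MKNOD"]},
    "initProcessEnabled": True,       # run an init process that forwards signals and reaps processes
    "sharedMemorySize": 128,          # size of /dev/shm in MiB
    "tmpfs": [
        {"containerPath": "/tmp/scratch", "size": 256, "mountOptions": ["rw", "noexec"]}
    ],
    "maxSwap": 512,                   # MiB; must be set for swappiness to take effect
    "swappiness": 60,
}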
secrets (list) --
The secrets to pass to the container. For more information, see Specifying Sensitive Data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
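An illustrative secrets list; both ARNs are placeholders for a Secrets Manager secret and an SSM Parameter Store parameter:
secrets = [
    {"name": "DB_PASSWORD",
     "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-password"},
    {"name": "API_KEY",
     "valueFrom": "arn:aws:ssm:us-east-1:111122223333:parameter/api-key"},
]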
dependsOn (list) --
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition. When a dependency is defined for container startup, it is reversed for container shutdown.
For tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
(dict) --
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, it is reversed for container shutdown.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For more information about how to create a container dependency, see Container dependency in the Amazon Elastic Container Service Developer Guide.
containerName (string) --
The name of a container.
condition (string) --
The dependency condition of the container. The following are the available conditions and their behavior:
START - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
COMPLETE - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container.
SUCCESS - This condition is the same as COMPLETE, but it also requires that the container exits with a zero status. This condition can't be set on an essential container.
HEALTHY - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup.
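A minimal dependsOn sketch, assuming a hypothetical migrations container that must exit successfully before the application container starts:
depends_on = [
    {"containerName": "db-migrations", "condition": "SUCCESS"}  # wait for a zero exit status before starting
]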
startTimeout (integer) --
Time duration (in seconds) to wait before giving up on resolving dependencies for a container. For example, you specify two containers in a task definition with containerA having a dependency on containerB reaching a COMPLETE, SUCCESS, or HEALTHY status. If a startTimeout value is specified for containerB and it doesn't reach the desired status within that time, then containerA gives up and doesn't start. This results in the task transitioning to a STOPPED state.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks using the EC2 launch type, your container instances require at least version 1.26.0 of the container agent to use a container start timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
stopTimeout (integer) --
Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.
For tasks using the Fargate launch type, the task or service requires the following platforms:
Linux platform version 1.3.0 or later.
Windows platform version 1.0.0 or later.
For tasks that use the Fargate launch type, the max stop timeout value is 120 seconds and if the parameter is not specified, the default value of 30 seconds is used.
For tasks that use the EC2 launch type, if the stopTimeout parameter isn't specified, the value set for the Amazon ECS container agent configuration variable ECS_CONTAINER_STOP_TIMEOUT is used. If neither the stopTimeout parameter nor the ECS_CONTAINER_STOP_TIMEOUT agent configuration variable is set, then the default value of 30 seconds is used for both Linux and Windows containers. Your container instances require at least version 1.26.0 of the container agent to use a container stop timeout value. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
The valid values for Fargate are 2-120 seconds.
versionConsistency (string) --
Specifies whether Amazon ECS will resolve the container image tag provided in the container definition to an image digest. By default, the value is enabled. If you set the value for a container as disabled, Amazon ECS will not resolve the provided container image tag to a digest and will use the original image URI specified in the container definition for deployment. For more information about container image resolution, see Container image resolution in the Amazon ECS Developer Guide.
hostname (string) --
The hostname to use for your container. This parameter maps to Hostname in the docker container create command and the --hostname option to docker run.
user (string) --
The user to use inside the container. This parameter maps to User in the docker container create command and the --user option to docker run.
You can specify the user using the following formats. If specifying a UID or GID, you must specify it as a positive integer.
user
user:group
uid
uid:gid
user:gid
uid:group
workingDirectory (string) --
The working directory to run commands inside the container in. This parameter maps to WorkingDir in the docker container create command and the --workdir option to docker run.
disableNetworking (boolean) --
When this parameter is true, networking is off within the container. This parameter maps to NetworkDisabled in the docker container create command.
privileged (boolean) --
When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root user). This parameter maps to Privileged in the docker container create command and the --privileged option to docker run.
readonlyRootFilesystem (boolean) --
When this parameter is true, the container is given read-only access to its root file system. This parameter maps to ReadonlyRootfs in the docker container create command and the --read-only option to docker run.
dnsServers (list) --
A list of DNS servers that are presented to the container. This parameter maps to Dns in the docker container create command and the --dns option to docker run.
(string) --
dnsSearchDomains (list) --
A list of DNS search domains that are presented to the container. This parameter maps to DnsSearch in the docker container create command and the --dns-search option to docker run.
(string) --
extraHosts (list) --
A list of hostnames and IP address mappings to append to the /etc/hosts file on the container. This parameter maps to ExtraHosts in the docker container create command and the --add-host option to docker run.
(dict) --
Hostnames and IP address entries that are added to the /etc/hosts file of a container via the extraHosts parameter of its ContainerDefinition.
hostname (string) --
The hostname to use in the /etc/hosts entry.
ipAddress (string) --
The IP address to use in the /etc/hosts entry.
dockerSecurityOptions (list) --
A list of strings to provide custom configuration for multiple security systems. This field isn't valid for containers in tasks using the Fargate launch type.
For Linux tasks on EC2, this parameter can be used to reference custom labels for SELinux and AppArmor multi-level security systems.
For any tasks on EC2, this parameter can be used to reference a credential spec file that configures a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers in the Amazon Elastic Container Service Developer Guide.
This parameter maps to SecurityOpt in the docker container create command and the --security-opt option to docker run.
Valid values: "no-new-privileges" | "apparmor:PROFILE" | "label:value" | "credentialspec:CredentialSpecFilePath"
(string) --
interactive (boolean) --
When this parameter is true, you can deploy containerized applications that require stdin or a tty to be allocated. This parameter maps to OpenStdin in the docker container create command and the --interactive option to docker run.
pseudoTerminal (boolean) --
When this parameter is true, a TTY is allocated. This parameter maps to Tty in the docker container create command and the --tty option to docker run.
dockerLabels (dict) --
A key/value map of labels to add to the container. This parameter maps to Labels in the docker container create command and the --label option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
ulimits (list) --
A list of ulimits to set in the container. If a ulimit value is specified in a task definition, it overrides the default values set by Docker. This parameter maps to Ulimits in the docker container create command and the --ulimit option to docker run. Valid naming values are displayed in the Ulimit data type.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile resource limit parameter which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(dict) --
The ulimit settings to pass to the container.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system with the exception of the nofile resource limit parameter which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
You can specify the ulimit settings for a container in a task definition.
name (string) --
The type of the ulimit.
softLimit (integer) --
The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
hardLimit (integer) --
The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
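For example, a ulimit entry that raises the open-file limit for a container might look like this sketch:
ulimits = [
    {"name": "nofile", "softLimit": 65536, "hardLimit": 65536}  # counts of open files, not bytes
]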
logConfiguration (dict) --
The log configuration specification for the container.
This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However the container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options).
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate.Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might otherwise exhaust the memory available for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
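Tying the awslogs options above together, a logConfiguration sketch might look like the following; the log group, Region, and prefix are placeholders:
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-app",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "web",   # stream name becomes web/<container-name>/<task-id>
        "mode": "non-blocking",           # buffer logs in memory instead of blocking stdout/stderr writes
        "max-buffer-size": "25m",
    },
}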
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
healthCheck (dict) --
The container health check command and associated configuration parameters for the container. This parameter maps to HealthCheck in the docker container create command and the HEALTHCHECK parameter of docker run.
command (list) --
A string array representing the command that the container runs to determine if it is healthy. The string array must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container's default shell.
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets.
[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]
You don't include the double quotes and brackets when you use the Amazon Web Services Management Console.
CMD-SHELL, curl -f http://localhost/ || exit 1
An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the docker container create command.
(string) --
interval (integer) --
The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a command.
timeout (integer) --
The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a command.
retries (integer) --
The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a command.
startPeriod (integer) --
The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the startPeriod is off. This value applies only when you specify a command.
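The parameters above combine into a healthCheck object such as the following sketch, reusing the CMD-SHELL example shown earlier:
health_check = {
    "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
    "interval": 30,      # seconds between checks
    "timeout": 5,        # seconds to wait before a check counts as failed
    "retries": 3,        # consecutive failures before the container is unhealthy
    "startPeriod": 60,   # grace period before failures count toward retries
}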
systemControls (list) --
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure the net.ipv4.tcp_keepalive_time setting to maintain longer lived connections.
(dict) --
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure the net.ipv4.tcp_keepalive_time setting to maintain longer lived connections.
We don't recommend that you specify network-related systemControls parameters for multiple containers in a single task that also uses either the awsvpc or host network mode. Doing this has the following disadvantages:
For tasks that use the awsvpc network mode including Fargate, if you set systemControls for any container, it applies to all containers in the task. If you set different systemControls for multiple containers in a single task, the container that's started last determines which systemControls take effect.
For tasks that use the host network mode, the network namespace systemControls aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode.
For tasks that use the host IPC mode, IPC namespace systemControls aren't supported.
For tasks that use the task IPC mode, IPC namespace systemControls values apply to all containers within a task.
namespace (string) --
The namespaced kernel parameter to set a value for.
value (string) --
The namespaced kernel parameter to set a value for.
Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*"
Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted.
All of these values are supported by Fargate.
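For example, the net.ipv4.tcp_keepalive_time setting mentioned above could be expressed as the following sketch:
system_controls = [
    {"namespace": "net.ipv4.tcp_keepalive_time", "value": "300"}  # seconds of idle time before TCP keepalive probes start
]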
resourceRequirements (list) --
The type and amount of a resource to assign to a container. The only supported resource is a GPU.
(dict) --
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
value (string) --
The value for the specified resource type.
When the type is GPU, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the deviceName for an InferenceAccelerator specified in a task definition.
type (string) --
The type of resource to assign to a container.
firelensConfiguration (dict) --
The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom Log Routing in the Amazon Elastic Container Service Developer Guide.
type (string) --
The log router to use. The valid values are fluentd or fluentbit.
options (dict) --
The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is "options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide.
(string) --
(string) --
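An illustrative firelensConfiguration using a custom Fluent Bit configuration file stored in Amazon S3; the bucket ARN is a placeholder:
firelens_configuration = {
    "type": "fluentbit",
    "options": {
        "enable-ecs-log-metadata": "true",
        "config-file-type": "s3",
        "config-file-value": "arn:aws:s3:::example-bucket/fluent.conf",
    },
}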
credentialSpecs (list) --
A list of ARNs in SSM or Amazon S3 to a credential spec ( CredSpec) file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the dockerSecurityOptions. The maximum number of ARNs is 1.
There are two formats for each ARN.
credentialspecdomainless:MyARN
You use credentialspecdomainless:MyARN to provide a CredSpec with an additional section for a secret in Secrets Manager. You provide the login credentials to the domain in the secret.
Each task that runs on any container instance can join different domains.
You can use this format without joining the container instance to a domain.
credentialspec:MyARN
You use credentialspec:MyARN to provide a CredSpec for a single domain.
You must join the container instance to the domain before you start any tasks that use this task definition.
In both formats, replace MyARN with the ARN in SSM or Amazon S3.
If you provide a credentialspecdomainless:MyARN, the credspec must provide an ARN in Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see Using gMSAs for Windows Containers and Using gMSAs for Linux Containers.
(string) --
family (string) --
The name of a family that this task definition is registered to. Up to 255 characters are allowed. Letters (both uppercase and lowercase letters), numbers, hyphens (-), and underscores (_) are allowed.
A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add.
taskRoleArn (string) --
The short name or full Amazon Resource Name (ARN) of the Identity and Access Management role that grants containers in the task permission to call Amazon Web Services APIs on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf. For information about the required IAM roles for Amazon ECS, see IAM roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
networkMode (string) --
The Docker networking mode to use for the containers in the task. The valid values are none, bridge, awsvpc, and host. If no network mode is specified, the default is bridge.
For Amazon ECS tasks on Fargate, the awsvpc network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, <default> or awsvpc can be used. If the network mode is set to none, you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The host and awsvpc network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the bridge mode.
With the host and awsvpc network modes, exposed container ports are mapped directly to the corresponding host port (for the host network mode) or the attached elastic network interface port (for the awsvpc network mode), so you cannot take advantage of dynamic host port mappings.
If the network mode is awsvpc, the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
If the network mode is host, you cannot run multiple instantiations of the same task on a single container instance when port mappings are used.
revision (integer) --
The revision of the task in a particular family. The revision is a version number of a task definition in a family. When you register a task definition for the first time, the revision is 1. Each time that you register a new revision of a task definition in the same family, the revision value always increases by one. This is true even if you deregistered previous revisions in this family.
volumes (list) --
The list of data volume definitions for the task. For more information, see Using data volumes in tasks in the Amazon Elastic Container Service Developer Guide.
(dict) --
The data volume configuration for tasks launched using this task definition. Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes but only one volume configured at launch is supported. Each volume defined in the volume configuration may only specify a name and one of either configuredAtLaunch, dockerVolumeConfiguration, efsVolumeConfiguration, fsxWindowsFileServerVolumeConfiguration, or host. If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks.
name (string) --
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task.
For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition.
When a volume is using the efsVolumeConfiguration, the name is required.
host (dict) --
This parameter is specified when you use bind mount host volumes. The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running.
Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't span drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my\path:C:\my\path or D:\:C:\my\path.
sourcePath (string) --
When the host parameter is used, specify a sourcePath to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
If you're using the Fargate launch type, the sourcePath parameter is not supported.
dockerVolumeConfiguration (dict) --
This parameter is specified when you use Docker volumes.
Windows containers only support the use of the local driver. To use bind mounts, specify the host parameter instead.
scope (string) --
The scope for the Docker volume that determines its lifecycle. Docker volumes that are scoped to a task are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as shared persist after the task stops.
autoprovision (boolean) --
If this value is true, the Docker volume is created if it doesn't already exist.
driver (string) --
The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use docker plugin ls to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. This parameter maps to Driver in the docker container create command and the --driver option to docker volume create.
driverOpts (dict) --
A map of Docker driver-specific options passed through. This parameter maps to DriverOpts in the docker volume create command and the --opt option to docker volume create.
(string) --
(string) --
labels (dict) --
Custom metadata to add to your Docker volume. This parameter maps to Labels in the docker container create command and the --label option to docker volume create.
(string) --
(string) --
efsVolumeConfiguration (dict) --
This parameter is specified when you use an Amazon Elastic File System file system for task storage.
fileSystemId (string) --
The Amazon EFS file system ID to use.
rootDirectory (string) --
The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume will be used. Specifying / will have the same effect as omitting this parameter.
transitEncryption (string) --
Determines whether to use encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. Transit encryption must be turned on if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of DISABLED is used. For more information, see Encrypting data in transit in the Amazon Elastic File System User Guide.
transitEncryptionPort (integer) --
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you do not specify a transit encryption port, it will use the port selection strategy that the Amazon EFS mount helper uses. For more information, see EFS mount helper in the Amazon Elastic File System User Guide.
authorizationConfig (dict) --
The authorization configuration details for the Amazon EFS file system.
accessPointId (string) --
The Amazon EFS access point ID to use. If an access point is specified, the root directory value specified in the EFSVolumeConfiguration must either be omitted or set to / which will enforce the path set on the EFS access point. If an access point is used, transit encryption must be on in the EFSVolumeConfiguration. For more information, see Working with Amazon EFS access points in the Amazon Elastic File System User Guide.
iam (string) --
Determines whether to use the Amazon ECS task role defined in a task definition when mounting the Amazon EFS file system. If it is turned on, transit encryption must be turned on in the EFSVolumeConfiguration. If this parameter is omitted, the default value of DISABLED is used. For more information, see Using Amazon EFS access points in the Amazon Elastic Container Service Developer Guide.
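A sketch of a task volume backed by Amazon EFS with an access point and IAM authorization; the file system and access point IDs are placeholders:
volume = {
    "name": "app-storage",
    "efsVolumeConfiguration": {
        "fileSystemId": "fs-0123456789abcdef0",
        "transitEncryption": "ENABLED",   # required when IAM authorization or an access point is used
        "authorizationConfig": {
            "accessPointId": "fsap-0123456789abcdef0",  # root directory must be omitted or "/" with an access point
            "iam": "ENABLED",
        },
    },
}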
fsxWindowsFileServerVolumeConfiguration (dict) --
This parameter is specified when you use Amazon FSx for Windows File Server file system for task storage.
fileSystemId (string) --
The Amazon FSx for Windows File Server file system ID to use.
rootDirectory (string) --
The directory within the Amazon FSx for Windows File Server file system to mount as the root directory inside the host.
authorizationConfig (dict) --
The authorization configuration details for the Amazon FSx for Windows File Server file system.
credentialsParameter (string) --
The authorization credential option to use. The authorization credential options can be provided using either the Amazon Resource Name (ARN) of a Secrets Manager secret or an SSM Parameter Store parameter. The ARN refers to the stored credentials.
domain (string) --
A fully qualified domain name hosted by a Directory Service Managed Microsoft AD (Active Directory) or self-hosted AD on Amazon EC2.
configuredAtLaunch (boolean) --
Indicates whether the volume should be configured at launch time. This is used to create Amazon EBS volumes for standalone tasks or tasks created as part of a service. Each task definition revision may only have one volume configured at launch in the volume configuration.
To configure a volume at launch time, use this task definition revision and specify a volumeConfigurations object when calling the CreateService, UpdateService, RunTask or StartTask APIs.
status (string) --
The status of the task definition.
requiresAttributes (list) --
The container instance attributes required by your task. When an Amazon EC2 instance is registered to your cluster, the Amazon ECS container agent assigns some standard attributes to the instance. You can apply custom attributes. These are specified as key-value pairs using the Amazon ECS console or the PutAttributes API. These attributes are used when determining task placement for tasks hosted on Amazon EC2 instances. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
(dict) --
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the attribute. The name must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
value (string) --
The value of the attribute. The value must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
targetType (string) --
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
targetId (string) --
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
placementConstraints (list) --
An array of placement constraint objects to use for tasks.
(dict) --
The constraint on task placement in the task definition. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. The MemberOf constraint restricts selection to be from a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
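For example, a memberOf constraint written with the cluster query language might look like this sketch (the expression is illustrative):
placement_constraints = [
    {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ t3.*"}
]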
compatibilities (list) --
Amazon ECS validates the task definition parameters with those supported by the launch type. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
(string) --
runtimePlatform (dict) --
The operating system that your task definitions are running on. A platform family is specified only for tasks using the Fargate launch type.
When you specify a task in a service, this value must match the runtimePlatform value of the service.
cpuArchitecture (string) --
The CPU architecture.
You can run your Linux tasks on an ARM-based platform by setting the value to ARM64. This option is available for tasks that run on Linux Amazon EC2 instances or Linux containers on Fargate.
operatingSystemFamily (string) --
The operating system.
requiresCompatibilities (list) --
The task launch types the task definition was validated against. The valid values are EC2, FARGATE, and EXTERNAL. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
(string) --
cpu (string) --
The number of cpu units used by the task. If you use the EC2 launch type, this field is optional. Any value can be used. If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines your range of valid values for the memory parameter.
If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between 128 CPU units (0.125 vCPUs) and 196608 CPU units (192 vCPUs).
This field is required for Fargate. For information about the valid values, see Task size in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The amount (in MiB) of memory used by the task.
If your tasks run on Amazon EC2 instances, you must specify either a task-level memory value or a container-level memory value. This field is optional and any value can be used. If a task-level memory value is specified, the container-level memory value is optional. For more information regarding container-level memory and memory reservation, see ContainerDefinition.
If your tasks run on Fargate, this field is required. You must use one of the following values. The value you choose determines your range of valid values for the cpu parameter.
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu values: 256 (.25 vCPU)
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available cpu values: 512 (.5 vCPU)
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available cpu values: 1024 (1 vCPU)
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available cpu values: 2048 (2 vCPU)
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available cpu values: 4096 (4 vCPU)
Between 16 GB and 60 GB in 4 GB increments - Available cpu values: 8192 (8 vCPU) This option requires Linux platform 1.4.0 or later.
Between 32 GB and 120 GB in 8 GB increments - Available cpu values: 16384 (16 vCPU) This option requires Linux platform 1.4.0 or later.
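As a hedged illustration of one valid pairing from the list above, a Fargate task definition requesting 1024 CPU units (1 vCPU) with 2048 MiB of memory might be registered as follows; the family name, image, and execution role ARN are placeholders.
import boto3

ecs = boto3.client("ecs")

# 1024 CPU units (1 vCPU) pairs with memory values from 2048 MiB through 8192 MiB.
ecs.register_task_definition(
    family="fargate-sample",  # placeholder
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",     # task-level CPU, expressed as a string
    memory="2048",  # task-level memory, expressed as a string
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {"name": "app", "image": "nginx:latest", "essential": True}
    ],
)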
inferenceAccelerators (list) --
The Elastic Inference accelerator that's associated with the task.
(dict) --
Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name. The deviceName must also be referenced in a container definition as a ResourceRequirement.
deviceType (string) --
The Elastic Inference accelerator type to use.
pidMode (string) --
The process namespace to use for the containers in the task. The valid values are host or task. On Fargate for Linux containers, the only valid value is task. For example, monitoring sidecars might need pidMode to access information about other containers running in the same task.
If host is specified, all containers within the tasks that specified the host PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.
If task is specified, all containers within the specified task share the same process namespace.
If no value is specified, the default is a private namespace for each container.
If the host PID mode is used, there's a heightened risk of undesired process namespace exposure.
ipcMode (string) --
The IPC resource namespace to use for the containers in the task. The valid values are host, task, or none. If host is specified, then all containers within the tasks that specified the host IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If task is specified, all containers within the specified task share the same IPC resources. If none is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance.
If the host IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure.
If you are setting namespaced kernel parameters using systemControls for the containers in the task, the following will apply to your IPC resource namespace. For more information, see System Controls in the Amazon Elastic Container Service Developer Guide.
For tasks that use the host IPC mode, IPC namespace related systemControls are not supported.
For tasks that use the task IPC mode, IPC namespace related systemControls will apply to all containers within a task.
proxyConfiguration (dict) --
The configuration details for the App Mesh proxy.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the ecs-init package to use a proxy configuration. If your container instances are launched from the Amazon ECS optimized AMI version 20190301 or later, they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
type (string) --
The proxy type. The only supported value is APPMESH.
containerName (string) --
The name of the container that will serve as the App Mesh proxy.
properties (list) --
The set of network configuration parameters to provide the Container Network Interface (CNI) plugin, specified as key-value pairs.
IgnoredUID - (Required) The user ID (UID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredGID is specified, this field can be empty.
IgnoredGID - (Required) The group ID (GID) of the proxy container as defined by the user parameter in a container definition. This is used to ensure the proxy ignores its own traffic. If IgnoredUID is specified, this field can be empty.
AppPorts - (Required) The list of ports that the application uses. Network traffic to these ports is forwarded to the ProxyIngressPort and ProxyEgressPort.
ProxyIngressPort - (Required) Specifies the port that incoming traffic to the AppPorts is directed to.
ProxyEgressPort - (Required) Specifies the port that outgoing traffic from the AppPorts is directed to.
EgressIgnoredPorts - (Required) The egress traffic going to the specified ports is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
EgressIgnoredIPs - (Required) The egress traffic going to the specified IP addresses is ignored and not redirected to the ProxyEgressPort. It can be an empty list.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
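The properties above are plain name/value pairs. The following is a sketch of how they might be supplied in a register_task_definition call for an App Mesh proxy; the container names, Envoy image URI, ports, and UID are illustrative placeholders, not a definitive configuration.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="appmesh-sample",  # placeholder
    networkMode="awsvpc",
    containerDefinitions=[
        {"name": "app", "image": "nginx:latest", "essential": True, "memory": 512},
        {
            "name": "envoy",
            "image": "ENVOY_IMAGE_URI",  # placeholder; use the App Mesh Envoy image for your Region
            "essential": True,
            "memory": 512,
            "user": "1337",  # matches IgnoredUID below so the proxy's own traffic is ignored
        },
    ],
    proxyConfiguration={
        "type": "APPMESH",
        "containerName": "envoy",
        "properties": [
            {"name": "IgnoredUID", "value": "1337"},
            {"name": "AppPorts", "value": "8080"},
            {"name": "ProxyIngressPort", "value": "15000"},
            {"name": "ProxyEgressPort", "value": "15001"},
            {"name": "EgressIgnoredPorts", "value": ""},  # required, may be empty
            {"name": "EgressIgnoredIPs", "value": "169.254.170.2,169.254.169.254"},
        ],
    },
)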
registeredAt (datetime) --
The Unix timestamp for the time when the task definition was registered.
deregisteredAt (datetime) --
The Unix timestamp for the time when the task definition was deregistered.
registeredBy (string) --
The principal that registered the task definition.
ephemeralStorage (dict) --
The ephemeral storage settings to use for tasks run with the task definition.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
enableFaultInjection (boolean) --
Enables fault injection and allows for fault injection requests to be accepted from the task's containers. The default value is false.
tags (list) --
The list of tags associated with the task definition.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
{'launchType': {'MANAGED_INSTANCES'}}Response
{'tasks': {'launchType': {'MANAGED_INSTANCES'}}}
Starts a new task using the specified task definition.
You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places tasks using placement constraints and placement strategies. For more information, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.
Alternatively, you can use StartTask to use your own scheduler or place tasks manually on specific container instances.
You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or updating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
The Amazon ECS API follows an eventual consistency model. This is because of the distributed nature of the system supporting the API. This means that the result of an API command you run that affects your Amazon ECS resources might not be immediately visible to all subsequent commands you run. Keep this in mind when you carry out an API command that immediately follows a previous API command.
To manage eventual consistency, you can do the following:
Confirm the state of the resource before you run a command to modify it. Run the DescribeTasks command using an exponential backoff algorithm to ensure that you allow enough time for the previous command to propagate through the system. To do this, run the DescribeTasks command repeatedly, starting with a couple of seconds of wait time and increasing gradually up to five minutes of wait time.
Add wait time between subsequent commands, even if the DescribeTasks command returns an accurate response. Apply an exponential backoff algorithm starting with a couple of seconds of wait time, and increase gradually up to about five minutes of wait time.
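A minimal sketch of that backoff pattern, assuming a task ARN returned by RunTask; the cluster name and timing values are illustrative.
import time

import boto3

ecs = boto3.client("ecs")

def wait_for_task(cluster, task_arn, max_wait_seconds=300):
    """Poll DescribeTasks with exponential backoff, capped at about five minutes."""
    delay, waited = 2, 0
    while waited < max_wait_seconds:
        response = ecs.describe_tasks(cluster=cluster, tasks=[task_arn])
        if response["tasks"]:
            status = response["tasks"][0]["lastStatus"]
            if status in ("RUNNING", "STOPPED"):
                return response["tasks"][0]
        time.sleep(delay)
        waited += delay
        delay = min(delay * 2, 60)  # back off gradually, never more than a minute
    raise TimeoutError(f"Task {task_arn} was not visible within {max_wait_seconds}s")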
If you get a ConflictException error, the RunTask request could not be processed due to conflicts. The provided clientToken is already in use with a different RunTask request. The resourceIds are the existing task ARNs which are already associated with the clientToken.
To fix this issue:
Run RunTask with a unique clientToken.
Run RunTask with the clientToken and the original set of parameters.
If you get a ClientException error, the RunTask request could not be processed because you use managed scaling and there is a capacity error because the quota of tasks in the PROVISIONING state per cluster has been reached. For information about the service quotas, see Amazon ECS service quotas.
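A hedged sketch of handling those errors, generating a fresh clientToken per logical request; the cluster and task definition names are placeholders.
import uuid

import boto3
from botocore.exceptions import ClientError

ecs = boto3.client("ecs")

def run_with_unique_token(cluster, task_definition):
    try:
        return ecs.run_task(
            cluster=cluster,
            taskDefinition=task_definition,
            clientToken=str(uuid.uuid4()),  # a unique token avoids ConflictException reuse
        )
    except ClientError as error:
        code = error.response["Error"]["Code"]
        if code == "ConflictException":
            # The token was already used with different parameters; retry with a new
            # token, or reuse the original token with the original parameters.
            raise
        if code == "ClientException":
            # Possibly the PROVISIONING task quota for the cluster was reached.
            raise
        raise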
See also: AWS API Documentation
Request Syntax
client.run_task( capacityProviderStrategy=[ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], cluster='string', count=123, enableECSManagedTags=True|False, enableExecuteCommand=True|False, group='string', launchType='EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, overrides={ 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, placementConstraints=[ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], placementStrategy=[ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], platformVersion='string', propagateTags='TASK_DEFINITION'|'SERVICE'|'NONE', referenceId='string', startedBy='string', tags=[ { 'key': 'string', 'value': 'string' }, ], taskDefinition='string', clientToken='string', volumeConfigurations=[ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'terminationPolicy': { 'deleteOnTermination': True|False }, 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ] )
list
The capacity provider strategy to use for the task.
If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used.
When you use cluster auto scaling, you must specify capacityProviderStrategy and not launchType.
A capacity provider strategy can contain a maximum of 20 capacity providers.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) -- [REQUIRED]
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
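A sketch of the weighted-distribution example above, expressed as a RunTask capacity provider strategy; the provider and task definition names are placeholders, and launchType is omitted because a strategy is supplied.
import boto3

ecs = boto3.client("ecs")

# The base of 2 on capacityProviderA is satisfied first; remaining tasks are
# split roughly 1:4 between A and B according to their weights.
ecs.run_task(
    cluster="default",
    taskDefinition="my-app",  # placeholder
    count=10,
    capacityProviderStrategy=[
        {"capacityProvider": "capacityProviderA", "base": 2, "weight": 1},
        {"capacityProvider": "capacityProviderB", "weight": 4},
    ],
)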
string
The short name or full Amazon Resource Name (ARN) of the cluster to run your task on. If you do not specify a cluster, the default cluster is assumed.
Each account receives a default cluster the first time you use the service, but you may also create other clusters.
integer
The number of instantiations of the specified task to place on your cluster. You can specify up to 10 tasks for each call.
boolean
Specifies whether to use Amazon ECS managed tags for the task. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
boolean
Determines whether to use the execute command functionality for the containers in this task. If true, this enables execute command functionality on all containers in the task.
If true, then the task definition must have a task role, or you must provide one as an override.
string
The name of the task group to associate with the task. The default value is the family name of the task definition (for example, family:my-family-name).
string
The infrastructure to run your standalone task on. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
The FARGATE launch type runs your tasks on Fargate On-Demand infrastructure.
The EC2 launch type runs your tasks on Amazon EC2 instances registered to your cluster.
The EXTERNAL launch type runs your tasks on your on-premises server or virtual machine (VM) capacity registered to your cluster.
A task can use either a launch type or a capacity provider strategy. If a launchType is specified, the capacityProviderStrategy parameter must be omitted.
When you use cluster auto scaling, you must specify capacityProviderStrategy and not launchType.
dict
The network configuration for the task. This parameter is required for task definitions that use the awsvpc network mode to receive their own elastic network interface, and it isn't supported for other network modes. For more information, see Task networking in the Amazon Elastic Container Service Developer Guide.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) -- [REQUIRED]
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
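A sketch of supplying the awsvpc network configuration on RunTask; the task definition, subnet ID, and security group ID are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="default",
    taskDefinition="my-awsvpc-task",  # placeholder; must use the awsvpc network mode
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],       # placeholder IDs
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)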
dict
A list of container overrides in JSON format that specify the name of a container in the specified task definition and the overrides it should receive. You can override the default command for a container (that's specified in the task definition or Docker image) with a command override. You can also override existing environment variables (that are specified in the task definition or Docker image) on a container or add new environment variables to it with an environment override.
A total of 8192 characters are allowed for overrides. This limit includes the JSON formatting characters of the override structure.
containerOverrides (list) --
One or more container overrides that are sent to a task.
(dict) --
The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is {"containerOverrides": [ ] }. If a non-empty container override is specified, the name parameter must be included.
You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide.
name (string) --
The name of the container that receives the override. This parameter is required if any override is specified.
command (list) --
The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name.
(string) --
environment (list) --
The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
environmentFiles (list) --
A list of files containing the environment variables to pass to a container, instead of the value from the container definition.
(dict) --
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the VARIABLE values.
value (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type (string) -- [REQUIRED]
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
cpu (integer) --
The number of cpu units reserved for the container, instead of the default value from the task definition. You must also specify a container name.
memory (integer) --
The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name.
memoryReservation (integer) --
The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name.
resourceRequirements (list) --
The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU.
(dict) --
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
value (string) -- [REQUIRED]
The value for the specified resource type.
When the type is GPU, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the deviceName for an InferenceAccelerator specified in a task definition.
type (string) -- [REQUIRED]
The type of resource to assign to a container.
cpu (string) --
The CPU override for the task.
inferenceAcceleratorOverrides (list) --
The Elastic Inference accelerator override for the task.
(dict) --
Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name to override for the task. This parameter must match a deviceName specified in the task definition.
deviceType (string) --
The Elastic Inference accelerator type to use.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The memory override for the task.
taskRoleArn (string) --
The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the Amazon Elastic Container Service Developer Guide.
ephemeralStorage (dict) --
The ephemeral storage setting override for the task.
sizeInGiB (integer) -- [REQUIRED]
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
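A sketch of a container override that swaps the command and adds an environment variable at run time; the container and task definition names are placeholders, and the whole overrides structure must stay under the 8192-character limit noted above.
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="default",
    taskDefinition="my-app",  # placeholder
    overrides={
        "containerOverrides": [
            {
                "name": "web",  # must name a container from the task definition
                "command": ["python", "manage.py", "migrate"],
                "environment": [{"name": "STAGE", "value": "canary"}],
                "memory": 1024,
            }
        ],
        # Task-level overrides are strings, mirroring the task definition fields.
        "cpu": "512",
        "memory": "2048",
    },
)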
list
An array of placement constraint objects to use for the task. You can specify up to 10 constraints for each task (including constraints in the task definition and those specified at runtime).
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
list
The placement strategy objects to use for the task. You can specify a maximum of 5 strategy rules for each task.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
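A sketch combining a placement constraint with spread and binpack strategies for an EC2 launch; the expression and names are illustrative.
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="default",
    taskDefinition="my-app",  # placeholder
    launchType="EC2",
    count=4,
    placementConstraints=[
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ m5.*"}
    ],
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
)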
string
The platform version the task uses. A platform version is only specified for tasks hosted on Fargate. If one isn't specified, the LATEST platform version is used. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
string
Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action.
string
This parameter is only used by Amazon ECS. It is not intended for use by customers.
string
An optional tag specified when a task is started. For example, if you automatically trigger a task to run a batch process job, you could apply a unique identifier for that job to your task with the startedBy parameter. You can then identify which tasks belong to that job by filtering the results of a ListTasks call with the startedBy value. Up to 128 letters (uppercase and lowercase), numbers, hyphens (-), forward slash (/), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, then the startedBy parameter contains the deployment ID of the service that starts it.
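A sketch of tagging tasks with a startedBy identifier and later filtering on it with ListTasks; the job identifier, cluster, and task definition names are placeholders.
import boto3

ecs = boto3.client("ecs")

job_id = "batch-job-42"  # placeholder identifier, up to 128 characters

ecs.run_task(
    cluster="default",
    taskDefinition="batch-worker",  # placeholder
    startedBy=job_id,
)

# Later, find every task that belongs to the job.
task_arns = ecs.list_tasks(cluster="default", startedBy=job_id)["taskArns"]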
list
The metadata that you apply to the task to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
string
[REQUIRED]
The family and revision ( family:revision) or full ARN of the task definition to run. If a revision isn't specified, the latest ACTIVE revision is used.
The full ARN value must match the value that you specified as the Resource of the principal's permissions policy.
When you specify a task definition, you must either specify a specific revision, or all revisions in the ARN.
To specify a specific revision, include the revision number in the ARN. For example, to specify revision 2, use arn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:2.
To specify all revisions, use the wildcard (*) in the ARN. For example, to specify all revisions, use arn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:*.
For more information, see Policy Resources for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
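A sketch of the two common ways to reference the task definition in the request; the account ID and family name are placeholders, and the :* wildcard form belongs in the IAM policy Resource rather than in the API call itself.
import boto3

ecs = boto3.client("ecs")

# Pin an exact revision with the full ARN (or the family:revision shorthand).
ecs.run_task(
    cluster="default",
    taskDefinition="arn:aws:ecs:us-east-1:111122223333:task-definition/TaskFamilyName:2",
)

# Omit the revision to use the latest ACTIVE revision of the family.
ecs.run_task(cluster="default", taskDefinition="TaskFamilyName")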
string
An identifier that you provide to ensure the idempotency of the request. It must be unique and is case sensitive. Up to 64 characters are allowed. The valid characters are characters in the range of 33-126, inclusive. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
list
The details of the volume that was configuredAtLaunch. You can configure the size, volumeType, IOPS, throughput, snapshot, and encryption in the TaskManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition.
(dict) --
Configuration settings for the task volume that was configuredAtLaunch that weren't set during RegisterTaskDef.
name (string) -- [REQUIRED]
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as false, the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information about encrypting Amazon EBS volumes attached to a task, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create the volume. You must specify either a snapshot ID or a volume size. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeInitializationRate (integer) --
The rate, in MiB/s, at which data is fetched from a snapshot of an existing Amazon EBS volume to create a new volume for attachment to the task. This property can be specified only if you specify a snapshotId. For more information, see Initialize Amazon EBS volumes in the Amazon EBS User Guide.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) -- [REQUIRED]
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) -- [REQUIRED]
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
terminationPolicy (dict) --
The termination policy for the volume when the task exits. This provides a way to control whether Amazon ECS terminates the Amazon EBS volume when the task stops.
deleteOnTermination (boolean) -- [REQUIRED]
Indicates whether the volume should be deleted when the task stops. If a value of true is specified, Amazon ECS deletes the Amazon EBS volume on your behalf when the task goes into the STOPPED state. If no value is specified, the default value of true is used. When set to false, Amazon ECS leaves the volume in your account.
filesystemType (string) --
The Linux filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start.
The available filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
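A fuller sketch of volumeConfigurations for a task whose volume was configuredAtLaunch, creating an encrypted gp3 volume; the role ARN, sizes, and names are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="default",
    taskDefinition="my-app",  # placeholder; its volume must be configuredAtLaunch
    volumeConfigurations=[
        {
            "name": "data",  # must match the volume name in the task definition
            "managedEBSVolume": {
                "encrypted": True,
                "volumeType": "gp3",
                "sizeInGiB": 100,   # gp3 supports 1-16,384 GiB
                "iops": 3000,
                "throughput": 125,
                "filesystemType": "xfs",
                "terminationPolicy": {"deleteOnTermination": True},
                "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",  # placeholder
            },
        }
    ],
)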
dict
Response Syntax
{ 'tasks': [ { 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'availabilityZone': 'string', 'capacityProviderName': 'string', 'clusterArn': 'string', 'connectivity': 'CONNECTED'|'DISCONNECTED', 'connectivityAt': datetime(2015, 1, 1), 'containerInstanceArn': 'string', 'containers': [ { 'containerArn': 'string', 'taskArn': 'string', 'name': 'string', 'image': 'string', 'imageDigest': 'string', 'runtimeId': 'string', 'lastStatus': 'string', 'exitCode': 123, 'reason': 'string', 'networkBindings': [ { 'bindIP': 'string', 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'containerPortRange': 'string', 'hostPortRange': 'string' }, ], 'networkInterfaces': [ { 'attachmentId': 'string', 'privateIpv4Address': 'string', 'ipv6Address': 'string' }, ], 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'managedAgents': [ { 'lastStartedAt': datetime(2015, 1, 1), 'name': 'ExecuteCommandAgent', 'reason': 'string', 'lastStatus': 'string' }, ], 'cpu': 'string', 'memory': 'string', 'memoryReservation': 'string', 'gpuIds': [ 'string', ] }, ], 'cpu': 'string', 'createdAt': datetime(2015, 1, 1), 'desiredStatus': 'string', 'enableExecuteCommand': True|False, 'executionStoppedAt': datetime(2015, 1, 1), 'group': 'string', 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'lastStatus': 'string', 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'memory': 'string', 'overrides': { 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, 'platformVersion': 'string', 'platformFamily': 'string', 'pullStartedAt': datetime(2015, 1, 1), 'pullStoppedAt': datetime(2015, 1, 1), 'startedAt': datetime(2015, 1, 1), 'startedBy': 'string', 'stopCode': 'TaskFailedToStart'|'EssentialContainerExited'|'UserInitiated'|'ServiceSchedulerInitiated'|'SpotInterruption'|'TerminationNotice', 'stoppedAt': datetime(2015, 1, 1), 'stoppedReason': 'string', 'stoppingAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'taskArn': 'string', 'taskDefinitionArn': 'string', 'version': 123, 'ephemeralStorage': { 'sizeInGiB': 123 }, 'fargateEphemeralStorage': { 'sizeInGiB': 123, 'kmsKeyId': 'string' } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] }Response Structure
(dict) --
tasks (list) --
A full description of the tasks that were run. The tasks that were successfully placed on your cluster are described here.
(dict) --
Details on a task in a cluster.
attachments (list) --
The Elastic Network Adapter that's associated with the task if the task uses the awsvpc network mode.
(dict) --
An object representing a container instance or task attachment.
id (string) --
The unique identifier for the attachment.
type (string) --
The type of the attachment, such as ElasticNetworkInterface, Service Connect, and AmazonElasticBlockStorage.
status (string) --
The status of the attachment. Valid values are PRECREATED, CREATED, ATTACHING, ATTACHED, DETACHING, DETACHED, DELETED, and FAILED.
details (list) --
Details of the attachment.
For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address.
For Service Connect services, this includes portName, clientAliases, discoveryName, and ingressPortOverride.
For Elastic Block Storage, this includes roleArn, deleteOnTermination, volumeName, volumeId, and statusReason (only when the attachment fails to create or attach).
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
attributes (list) --
The attributes of the task.
(dict) --
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the attribute. The name must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
value (string) --
The value of the attribute. The value must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
targetType (string) --
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
targetId (string) --
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
availabilityZone (string) --
The Availability Zone for the task.
capacityProviderName (string) --
The capacity provider that's associated with the task.
clusterArn (string) --
The ARN of the cluster that hosts the task.
connectivity (string) --
The connectivity status of a task.
connectivityAt (datetime) --
The Unix timestamp for the time when the task last went into CONNECTED status.
containerInstanceArn (string) --
The ARN of the container instances that host the task.
containers (list) --
The containers that are associated with the task.
(dict) --
A Docker container that's part of a task.
containerArn (string) --
The Amazon Resource Name (ARN) of the container.
taskArn (string) --
The ARN of the task.
name (string) --
The name of the container.
image (string) --
The image used for the container.
imageDigest (string) --
The container image manifest digest.
runtimeId (string) --
The ID of the Docker container.
lastStatus (string) --
The last known status of the container.
exitCode (integer) --
The exit code returned from the container.
reason (string) --
A short (1024 max characters) human-readable string to provide additional details about a running or stopped container.
networkBindings (list) --
The network bindings associated with the container.
(dict) --
Details on the network bindings between a container and its host container instance. After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the networkBindings section of DescribeTasks API responses.
bindIP (string) --
The IP address that the container is bound to on the container instance.
containerPort (integer) --
The port number on the container that's used with the network binding.
hostPort (integer) --
The port number on the host that's used with the network binding.
protocol (string) --
The protocol used for the network binding.
containerPortRange (string) --
The port number range on the container that's bound to the dynamically mapped host port range.
The following rules apply when you specify a containerPortRange:
You must use either the bridge network mode or the awsvpc network mode.
This parameter is available for both the EC2 and Fargate launch types.
This parameter is available for both the Linux and Windows operating systems.
The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the ecs-init package
You can specify a maximum of 100 port ranges per container.
You do not specify a hostPortRange. The value of the hostPortRange is set as follows:
For containers in a task with the awsvpc network mode, the hostPortRange is set to the same value as the containerPortRange. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes it to docker to bind them to the container ports.
The containerPortRange valid values are between 1 and 65535.
A port can only be included in one port mapping per container.
You cannot specify overlapping port ranges.
The first port in the range must be less than the last port in the range.
Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the Github website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the Amazon ECS Developer Guide.
You can call DescribeTasks to view the hostPortRange which are the host ports that are bound to the container ports.
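A small sketch of reading the bound ranges back from DescribeTasks; the cluster name and task ARN are placeholders.
import boto3

ecs = boto3.client("ecs")

response = ecs.describe_tasks(
    cluster="default",
    tasks=["arn:aws:ecs:us-east-1:111122223333:task/default/EXAMPLE"],  # placeholder ARN
)
for container in response["tasks"][0]["containers"]:
    for binding in container.get("networkBindings", []):
        print(container["name"], binding.get("containerPortRange"), "->", binding.get("hostPortRange"))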
hostPortRange (string) --
The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent.
networkInterfaces (list) --
The network interfaces associated with the container.
(dict) --
An object representing the elastic network interface for tasks that use the awsvpc network mode.
attachmentId (string) --
The attachment ID for the network interface.
privateIpv4Address (string) --
The private IPv4 address for the network interface.
ipv6Address (string) --
The private IPv6 address for the network interface.
healthStatus (string) --
The health status of the container. If health checks aren't configured for this container in its task definition, then it reports the health status as UNKNOWN.
managedAgents (list) --
The details of any Amazon ECS managed agents associated with the container.
(dict) --
Details about the managed agent status for the container.
lastStartedAt (datetime) --
The Unix timestamp for the time when the managed agent was last started.
name (string) --
The name of the managed agent. When the execute command feature is turned on, the managed agent name is ExecuteCommandAgent.
reason (string) --
The reason for why the managed agent is in the state it is in.
lastStatus (string) --
The last known status of the managed agent.
cpu (string) --
The number of CPU units set for the container. The value is 0 if no value was specified in the container definition when the task definition was registered.
memory (string) --
The hard limit (in MiB) of memory set for the container.
memoryReservation (string) --
The soft limit (in MiB) of memory set for the container.
gpuIds (list) --
The IDs of each GPU assigned to the container.
(string) --
cpu (string) --
The number of CPU units used by the task as expressed in a task definition. It can be expressed as an integer using CPU units (for example, 1024). It can also be expressed as a string using vCPUs (for example, 1 vCPU or 1 vcpu). String values are converted to an integer that indicates the CPU units when the task definition is registered.
If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between 128 CPU units (0.125 vCPUs) and 196608 CPU units (192 vCPUs). If you do not specify a value, the parameter is ignored.
This field is required for Fargate. For information about the valid values, see Task size in the Amazon Elastic Container Service Developer Guide.
createdAt (datetime) --
The Unix timestamp for the time when the task was created. More specifically, it's for the time when the task entered the PENDING state.
desiredStatus (string) --
The desired status of the task. For more information, see Task Lifecycle.
enableExecuteCommand (boolean) --
Determines whether execute command functionality is turned on for this task. If true, execute command functionality is turned on for all the containers in the task.
executionStoppedAt (datetime) --
The Unix timestamp for the time when the task execution stopped.
group (string) --
The name of the task group that's associated with the task.
healthStatus (string) --
The health status for the task. It's determined by the health of the essential containers in the task. If all essential containers in the task are reporting as HEALTHY, the task status also reports as HEALTHY. If any essential containers in the task are reporting as UNHEALTHY or UNKNOWN, the task status also reports as UNHEALTHY or UNKNOWN.
inferenceAccelerators (list) --
The Elastic Inference accelerator that's associated with the task.
(dict) --
Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name. The deviceName must also be referenced in a container definition as a ResourceRequirement.
deviceType (string) --
The Elastic Inference accelerator type to use.
lastStatus (string) --
The last known status for the task. For more information, see Task Lifecycle.
launchType (string) --
The infrastructure where your task runs on. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The amount of memory (in MiB) that the task uses as expressed in a task definition. It can be expressed as an integer using MiB (for example, 1024). If it's expressed as a string using GB (for example, 1GB or 1 GB), it's converted to an integer indicating the MiB when the task definition is registered.
If you use the EC2 launch type, this field is optional.
If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines the range of supported values for the cpu parameter.
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu values: 256 (.25 vCPU)
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available cpu values: 512 (.5 vCPU)
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available cpu values: 1024 (1 vCPU)
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available cpu values: 2048 (2 vCPU)
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available cpu values: 4096 (4 vCPU)
Between 16 GB and 60 GB in 4 GB increments - Available cpu values: 8192 (8 vCPU) This option requires Linux platform 1.4.0 or later.
Between 32 GB and 120 GB in 8 GB increments - Available cpu values: 16384 (16 vCPU) This option requires Linux platform 1.4.0 or later.
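These cpu and memory pairings are easy to get wrong when building Fargate task sizes programmatically. The following is a minimal sketch of a validation helper that encodes the combinations listed above; FARGATE_TASK_SIZES and is_valid_fargate_size are illustrative names, not part of the ECS API, and the table reflects only the ranges documented here.

# Minimal sketch: map supported Fargate CPU units to supported memory values (MiB).
FARGATE_TASK_SIZES = {
    256:   [512, 1024, 2048],
    512:   [1024, 2048, 3072, 4096],
    1024:  [2048, 3072, 4096, 5120, 6144, 7168, 8192],
    2048:  list(range(4096, 16385, 1024)),    # 4 GB - 16 GB in 1 GB steps
    4096:  list(range(8192, 30721, 1024)),    # 8 GB - 30 GB in 1 GB steps
    8192:  list(range(16384, 61441, 4096)),   # 16 GB - 60 GB in 4 GB steps (Linux platform 1.4.0+)
    16384: list(range(32768, 122881, 8192)),  # 32 GB - 120 GB in 8 GB steps (Linux platform 1.4.0+)
}

def is_valid_fargate_size(cpu_units: int, memory_mib: int) -> bool:
    """Return True if the cpu/memory pair is a supported Fargate task size."""
    return memory_mib in FARGATE_TASK_SIZES.get(cpu_units, [])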
overrides (dict) --
One or more container overrides.
containerOverrides (list) --
One or more container overrides that are sent to a task.
(dict) --
The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is {"containerOverrides": [ ] }. If a non-empty container override is specified, the name parameter must be included.
You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide.
name (string) --
The name of the container that receives the override. This parameter is required if any override is specified.
command (list) --
The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name.
(string) --
environment (list) --
The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
environmentFiles (list) --
A list of files containing the environment variables to pass to a container, instead of the value from the container definition.
(dict) --
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the VARIABLE values.
value (string) --
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type (string) --
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
cpu (integer) --
The number of cpu units reserved for the container, instead of the default value from the task definition. You must also specify a container name.
memory (integer) --
The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name.
memoryReservation (integer) --
The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name.
resourceRequirements (list) --
The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU.
(dict) --
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
value (string) --
The value for the specified resource type.
When the type is GPU, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the deviceName for an InferenceAccelerator specified in a task definition.
type (string) --
The type of resource to assign to a container.
cpu (string) --
The CPU override for the task.
inferenceAcceleratorOverrides (list) --
The Elastic Inference accelerator override for the task.
(dict) --
Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name to override for the task. This parameter must match a deviceName specified in the task definition.
deviceType (string) --
The Elastic Inference accelerator type to use.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The memory override for the task.
taskRoleArn (string) --
The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the Amazon Elastic Container Service Developer Guide.
ephemeralStorage (dict) --
The ephemeral storage setting override for the task.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
platformVersion (string) --
The platform version your task runs on. A platform version is only specified for tasks that use the Fargate launch type. If you didn't specify one, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
pullStartedAt (datetime) --
The Unix timestamp for the time when the container image pull began.
pullStoppedAt (datetime) --
The Unix timestamp for the time when the container image pull completed.
startedAt (datetime) --
The Unix timestamp for the time when the task started. More specifically, it's for the time when the task transitioned from the PENDING state to the RUNNING state.
startedBy (string) --
The tag specified when a task is started. If an Amazon ECS service started the task, the startedBy parameter contains the deployment ID of that service.
stopCode (string) --
The stop code indicating why a task was stopped. The stoppedReason might contain additional details.
For more information about stop code, see Stopped tasks error codes in the Amazon ECS Developer Guide.
stoppedAt (datetime) --
The Unix timestamp for the time when the task was stopped. More specifically, it's for the time when the task transitioned from the RUNNING state to the STOPPED state.
stoppedReason (string) --
The reason that the task was stopped.
stoppingAt (datetime) --
The Unix timestamp for the time when the task stops. More specifically, it's for the time when the task transitions from the RUNNING state to STOPPING.
tags (list) --
The metadata that you apply to the task to help you categorize and organize the task. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
taskArn (string) --
The Amazon Resource Name (ARN) of the task.
taskDefinitionArn (string) --
The ARN of the task definition that creates the task.
version (integer) --
The version counter for the task. Every time a task experiences a change that starts a CloudWatch event, the version counter is incremented. If you replicate your Amazon ECS task state with CloudWatch Events, you can compare the version of a task reported by the Amazon ECS API actions with the version reported in CloudWatch Events for the task (inside the detail object) to verify that the version in your event stream is current.
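As a sketch of the comparison described above, the snippet below drops an EventBridge/CloudWatch Events task state-change event when it is older than the version Amazon ECS currently reports for the task. It assumes the standard ECS task state change event shape (detail.taskArn, detail.version) and a configured boto3 client; event handling beyond the check is up to the caller.

import boto3

ecs = boto3.client("ecs")

def event_is_current(event: dict, cluster: str) -> bool:
    """Return True if the event's version is at least the task's current version."""
    task_arn = event["detail"]["taskArn"]
    event_version = event["detail"]["version"]
    described = ecs.describe_tasks(cluster=cluster, tasks=[task_arn])["tasks"]
    if not described:
        return False  # task no longer exists
    return event_version >= described[0]["version"]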
ephemeralStorage (dict) --
The ephemeral storage settings for the task.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task.
sizeInGiB (integer) --
The total amount, in GiB, of the ephemeral storage to set for the task. The minimum supported value is 20 GiB and the maximum supported value is 200 GiB.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the task.
failures (list) --
Any failures associated with the call.
For information about how to address failures, see Service event messages and API failure reasons in the Amazon Elastic Container Service Developer Guide.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide.
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
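Because a partially successful call still returns the tasks and failures lists side by side, callers typically inspect failures themselves. A minimal sketch that reports each failed resource using the field names documented above; report_failures is an illustrative helper, not part of the API.

def report_failures(response: dict) -> None:
    """Print any per-resource failures from a response shaped as documented above."""
    for failure in response.get("failures", []):
        print(f"{failure.get('arn')}: {failure.get('reason')} - {failure.get('detail')}")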
{'tasks': {'launchType': {'MANAGED_INSTANCES'}}}
Starts a new task from the specified task definition on the specified container instance or instances.
Alternatively, you can use RunTask to place tasks for you. For more information, see Scheduling Tasks in the Amazon Elastic Container Service Developer Guide.
You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or updating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
See also: AWS API Documentation
Request Syntax
client.start_task( cluster='string', containerInstances=[ 'string', ], enableECSManagedTags=True|False, enableExecuteCommand=True|False, group='string', networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, overrides={ 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, propagateTags='TASK_DEFINITION'|'SERVICE'|'NONE', referenceId='string', startedBy='string', tags=[ { 'key': 'string', 'value': 'string' }, ], taskDefinition='string', volumeConfigurations=[ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'terminationPolicy': { 'deleteOnTermination': True|False }, 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ] )
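A minimal sketch of calling start_task with only a few of the parameters above; the cluster name, container instance ARN, task definition family, and startedBy value are placeholders.

import boto3

ecs = boto3.client("ecs")

# Start one task from a task definition on a specific container instance.
# 'default', the container instance ARN, and 'my-task-family' are placeholder values.
response = ecs.start_task(
    cluster="default",
    containerInstances=["arn:aws:ecs:us-east-1:111122223333:container-instance/default/EXAMPLE"],
    taskDefinition="my-task-family",
    startedBy="batch-job-42",
    enableECSManagedTags=True,
)
for task in response["tasks"]:
    print(task["taskArn"], task["lastStatus"])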
string
The short name or full Amazon Resource Name (ARN) of the cluster where to start your task. If you do not specify a cluster, the default cluster is assumed.
list
[REQUIRED]
The container instance IDs or full ARN entries for the container instances where you would like to place your task. You can specify up to 10 container instances.
(string) --
boolean
Specifies whether to use Amazon ECS managed tags for the task. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
boolean
Whether or not the execute command functionality is turned on for the task. If true, this turns on the execute command functionality on all containers in the task.
string
The name of the task group to associate with the task. The default value is the family name of the task definition (for example, family:my-family-name).
dict
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) -- [REQUIRED]
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
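For task definitions that use the awsvpc network mode, the networkConfiguration argument might look like the following sketch; the subnet and security group IDs are placeholders.

# Placeholder subnet and security group IDs; adjust to your VPC.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "DISABLED",
    }
}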
dict
A list of container overrides in JSON format that specify the name of a container in the specified task definition and the overrides it receives. You can override the default command for a container (that's specified in the task definition or Docker image) with a command override. You can also override existing environment variables (that are specified in the task definition or Docker image) on a container or add new environment variables to it with an environment override.
containerOverrides (list) --
One or more container overrides that are sent to a task.
(dict) --
The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is {"containerOverrides": [ ] }. If a non-empty container override is specified, the name parameter must be included.
You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide.
name (string) --
The name of the container that receives the override. This parameter is required if any override is specified.
command (list) --
The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name.
(string) --
environment (list) --
The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
environmentFiles (list) --
A list of files containing the environment variables to pass to a container, instead of the value from the container definition.
(dict) --
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the VARIABLE values.
value (string) -- [REQUIRED]
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type (string) -- [REQUIRED]
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
cpu (integer) --
The number of cpu units reserved for the container, instead of the default value from the task definition. You must also specify a container name.
memory (integer) --
The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name.
memoryReservation (integer) --
The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name.
resourceRequirements (list) --
The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU.
(dict) --
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
value (string) -- [REQUIRED]
The value for the specified resource type.
When the type is GPU, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the deviceName for an InferenceAccelerator specified in a task definition.
type (string) -- [REQUIRED]
The type of resource to assign to a container.
cpu (string) --
The CPU override for the task.
inferenceAcceleratorOverrides (list) --
The Elastic Inference accelerator override for the task.
(dict) --
Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name to override for the task. This parameter must match a deviceName specified in the task definition.
deviceType (string) --
The Elastic Inference accelerator type to use.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The memory override for the task.
taskRoleArn (string) --
The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the Amazon Elastic Container Service Developer Guide.
ephemeralStorage (dict) --
The ephemeral storage setting override for the task.
sizeInGiB (integer) -- [REQUIRED]
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
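Putting the override fields above together, an overrides argument might look like the following sketch. The container name, S3 object ARN, role ARN, and sizes are placeholders; the referenced environment file must contain lines in VARIABLE=VALUE format as described above.

# Placeholder names and ARNs; the structure mirrors the overrides fields above.
overrides = {
    "containerOverrides": [
        {
            "name": "app",                      # must match a container in the task definition
            "command": ["python", "worker.py", "--once"],
            "environment": [{"name": "JOB_ID", "value": "42"}],
            "environmentFiles": [
                {"value": "arn:aws:s3:::example-bucket/app.env", "type": "s3"}
            ],
            "cpu": 512,
            "memory": 1024,
            "resourceRequirements": [{"value": "1", "type": "GPU"}],
        }
    ],
    "taskRoleArn": "arn:aws:iam::111122223333:role/example-task-role",
    "ephemeralStorage": {"sizeInGiB": 50},      # supported range is 21-200 GiB
}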
string
Specifies whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
string
This parameter is only used by Amazon ECS. It is not intended for use by customers.
string
An optional tag specified when a task is started. For example, if you automatically trigger a task to run a batch process job, you could apply a unique identifier for that job to your task with the startedBy parameter. You can then identify which tasks belong to that job by filtering the results of a ListTasks call with the startedBy value. Up to 36 letters (uppercase and lowercase), numbers, hyphens (-), forward slash (/), and underscores (_) are allowed.
If a task is started by an Amazon ECS service, the startedBy parameter contains the deployment ID of the service that starts it.
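As described above, you can later find a job's tasks by filtering ListTasks on the same startedBy value; a minimal sketch with placeholder cluster and startedBy values:

import boto3

ecs = boto3.client("ecs")

# List the tasks that were started with a given startedBy value (placeholders shown).
task_arns = ecs.list_tasks(cluster="default", startedBy="batch-job-42")["taskArns"]
print(task_arns)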
list
The metadata that you apply to the task to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
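A short sketch of the tags parameter, using placeholder keys and values that follow the restrictions above:

# Placeholder tag keys and values; keys must be unique per resource.
tags = [
    {"key": "team", "value": "data-platform"},
    {"key": "environment", "value": "staging"},
]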
string
[REQUIRED]
The family and revision ( family:revision) or full ARN of the task definition to start. If a revision isn't specified, the latest ACTIVE revision is used.
list
The details of the volume that was configuredAtLaunch. You can configure the size, volumeType, IOPS, throughput, snapshot and encryption in TaskManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition.
(dict) --
Configuration settings for the task volume that was configuredAtLaunch that weren't set during RegisterTaskDef.
name (string) -- [REQUIRED]
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as false, the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information about encrypting Amazon EBS volumes attached to a task, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create the volume. You must specify either a snapshot ID or a volume size. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeInitializationRate (integer) --
The rate, in MiB/s, at which data is fetched from a snapshot of an existing Amazon EBS volume to create a new volume for attachment to the task. This property can be specified only if you specify a snapshotId. For more information, see Initialize Amazon EBS volumes in the Amazon EBS User Guide.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) -- [REQUIRED]
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) -- [REQUIRED]
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
terminationPolicy (dict) --
The termination policy for the volume when the task exits. This provides a way to control whether Amazon ECS terminates the Amazon EBS volume when the task stops.
deleteOnTermination (boolean) -- [REQUIRED]
Indicates whether the volume should be deleted when the task stops. If a value of true is specified, Amazon ECS deletes the Amazon EBS volume on your behalf when the task goes into the STOPPED state. If no value is specified, the default value of true is used. When set to false, Amazon ECS leaves the volume in your account.
filesystemType (string) --
The Linux filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start.
The available filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
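A sketch of a volumeConfigurations entry for a volume marked configuredAtLaunch in the task definition; the volume name, role ARN, and sizes are placeholders.

# Placeholder volume name and role ARN; the name must match the
# configuredAtLaunch volume in the task definition.
volume_configurations = [
    {
        "name": "scratch",
        "managedEBSVolume": {
            "encrypted": True,
            "volumeType": "gp3",
            "sizeInGiB": 100,
            "iops": 3000,
            "throughput": 125,
            "filesystemType": "xfs",
            "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
            "terminationPolicy": {"deleteOnTermination": True},
        },
    }
]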
dict
Response Syntax
{ 'tasks': [ { 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'availabilityZone': 'string', 'capacityProviderName': 'string', 'clusterArn': 'string', 'connectivity': 'CONNECTED'|'DISCONNECTED', 'connectivityAt': datetime(2015, 1, 1), 'containerInstanceArn': 'string', 'containers': [ { 'containerArn': 'string', 'taskArn': 'string', 'name': 'string', 'image': 'string', 'imageDigest': 'string', 'runtimeId': 'string', 'lastStatus': 'string', 'exitCode': 123, 'reason': 'string', 'networkBindings': [ { 'bindIP': 'string', 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'containerPortRange': 'string', 'hostPortRange': 'string' }, ], 'networkInterfaces': [ { 'attachmentId': 'string', 'privateIpv4Address': 'string', 'ipv6Address': 'string' }, ], 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'managedAgents': [ { 'lastStartedAt': datetime(2015, 1, 1), 'name': 'ExecuteCommandAgent', 'reason': 'string', 'lastStatus': 'string' }, ], 'cpu': 'string', 'memory': 'string', 'memoryReservation': 'string', 'gpuIds': [ 'string', ] }, ], 'cpu': 'string', 'createdAt': datetime(2015, 1, 1), 'desiredStatus': 'string', 'enableExecuteCommand': True|False, 'executionStoppedAt': datetime(2015, 1, 1), 'group': 'string', 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'lastStatus': 'string', 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'memory': 'string', 'overrides': { 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, 'platformVersion': 'string', 'platformFamily': 'string', 'pullStartedAt': datetime(2015, 1, 1), 'pullStoppedAt': datetime(2015, 1, 1), 'startedAt': datetime(2015, 1, 1), 'startedBy': 'string', 'stopCode': 'TaskFailedToStart'|'EssentialContainerExited'|'UserInitiated'|'ServiceSchedulerInitiated'|'SpotInterruption'|'TerminationNotice', 'stoppedAt': datetime(2015, 1, 1), 'stoppedReason': 'string', 'stoppingAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'taskArn': 'string', 'taskDefinitionArn': 'string', 'version': 123, 'ephemeralStorage': { 'sizeInGiB': 123 }, 'fargateEphemeralStorage': { 'sizeInGiB': 123, 'kmsKeyId': 'string' } }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] }Response Structure
(dict) --
tasks (list) --
A full description of the tasks that were started. Each task that was successfully placed on your container instances is described.
(dict) --
Details on a task in a cluster.
attachments (list) --
The Elastic Network Adapter that's associated with the task if the task uses the awsvpc network mode.
(dict) --
An object representing a container instance or task attachment.
id (string) --
The unique identifier for the attachment.
type (string) --
The type of the attachment, such as ElasticNetworkInterface, Service Connect, and AmazonElasticBlockStorage.
status (string) --
The status of the attachment. Valid values are PRECREATED, CREATED, ATTACHING, ATTACHED, DETACHING, DETACHED, DELETED, and FAILED.
details (list) --
Details of the attachment.
For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address.
For Service Connect services, this includes portName, clientAliases, discoveryName, and ingressPortOverride.
For Elastic Block Storage, this includes roleArn, deleteOnTermination, volumeName, volumeId, and statusReason (only when the attachment fails to create or attach).
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
attributes (list) --
The attributes of the task.
(dict) --
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the attribute. The name must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
value (string) --
The value of the attribute. The value must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
targetType (string) --
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
targetId (string) --
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
availabilityZone (string) --
The Availability Zone for the task.
capacityProviderName (string) --
The capacity provider that's associated with the task.
clusterArn (string) --
The ARN of the cluster that hosts the task.
connectivity (string) --
The connectivity status of a task.
connectivityAt (datetime) --
The Unix timestamp for the time when the task last went into CONNECTED status.
containerInstanceArn (string) --
The ARN of the container instances that host the task.
containers (list) --
The containers that are associated with the task.
(dict) --
A Docker container that's part of a task.
containerArn (string) --
The Amazon Resource Name (ARN) of the container.
taskArn (string) --
The ARN of the task.
name (string) --
The name of the container.
image (string) --
The image used for the container.
imageDigest (string) --
The container image manifest digest.
runtimeId (string) --
The ID of the Docker container.
lastStatus (string) --
The last known status of the container.
exitCode (integer) --
The exit code returned from the container.
reason (string) --
A short (1024 max characters) human-readable string to provide additional details about a running or stopped container.
networkBindings (list) --
The network bindings associated with the container.
(dict) --
Details on the network bindings between a container and its host container instance. After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the networkBindings section of DescribeTasks API responses.
bindIP (string) --
The IP address that the container is bound to on the container instance.
containerPort (integer) --
The port number on the container that's used with the network binding.
hostPort (integer) --
The port number on the host that's used with the network binding.
protocol (string) --
The protocol used for the network binding.
containerPortRange (string) --
The port number range on the container that's bound to the dynamically mapped host port range.
The following rules apply when you specify a containerPortRange:
You must use either the bridge network mode or the awsvpc network mode.
This parameter is available for both the EC2 and Fargate launch types.
This parameter is available for both the Linux and Windows operating systems.
The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the ecs-init package
You can specify a maximum of 100 port ranges per container.
You do not specify a hostPortRange. The value of the hostPortRange is set as follows:
For containers in a task with the awsvpc network mode, the hostPortRange is set to the same value as the containerPortRange. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes it to docker to bind them to the container ports.
The containerPortRange valid values are between 1 and 65535.
A port can only be included in one port mapping per container.
You cannot specify overlapping port ranges.
The first port in the range must be less than the last port in the range.
Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the Github website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the Amazon ECS Developer Guide.
You can call DescribeTasks to view the hostPortRange which are the host ports that are bound to the container ports.
hostPortRange (string) --
The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent.
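As noted above, the assigned host port ranges can be read back with DescribeTasks once the task reaches RUNNING; a minimal sketch with a placeholder cluster name and task ARN:

import boto3

ecs = boto3.client("ecs")

# Placeholder cluster name and task ARN.
task = ecs.describe_tasks(
    cluster="default",
    tasks=["arn:aws:ecs:us-east-1:111122223333:task/default/EXAMPLE"],
)["tasks"][0]
for container in task["containers"]:
    for binding in container.get("networkBindings", []):
        print(container["name"], binding.get("containerPortRange"), binding.get("hostPortRange"))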
networkInterfaces (list) --
The network interfaces associated with the container.
(dict) --
An object representing the elastic network interface for tasks that use the awsvpc network mode.
attachmentId (string) --
The attachment ID for the network interface.
privateIpv4Address (string) --
The private IPv4 address for the network interface.
ipv6Address (string) --
The private IPv6 address for the network interface.
healthStatus (string) --
The health status of the container. If health checks aren't configured for this container in its task definition, then it reports the health status as UNKNOWN.
managedAgents (list) --
The details of any Amazon ECS managed agents associated with the container.
(dict) --
Details about the managed agent status for the container.
lastStartedAt (datetime) --
The Unix timestamp for the time when the managed agent was last started.
name (string) --
The name of the managed agent. When the execute command feature is turned on, the managed agent name is ExecuteCommandAgent.
reason (string) --
The reason the managed agent is in its current state.
lastStatus (string) --
The last known status of the managed agent.
cpu (string) --
The number of CPU units set for the container. The value is 0 if no value was specified in the container definition when the task definition was registered.
memory (string) --
The hard limit (in MiB) of memory set for the container.
memoryReservation (string) --
The soft limit (in MiB) of memory set for the container.
gpuIds (list) --
The IDs of each GPU assigned to the container.
(string) --
cpu (string) --
The number of CPU units used by the task as expressed in a task definition. It can be expressed as an integer using CPU units (for example, 1024). It can also be expressed as a string using vCPUs (for example, 1 vCPU or 1 vcpu). String values are converted to an integer that indicates the CPU units when the task definition is registered.
If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between 128 CPU units ( 0.125 vCPUs) and 196608 CPU units ( 192 vCPUs). If you do not specify a value, the parameter is ignored.
This field is required for Fargate. For information about the valid values, see Task size in the Amazon Elastic Container Service Developer Guide.
createdAt (datetime) --
The Unix timestamp for the time when the task was created. More specifically, it's for the time when the task entered the PENDING state.
desiredStatus (string) --
The desired status of the task. For more information, see Task Lifecycle.
enableExecuteCommand (boolean) --
Determines whether execute command functionality is turned on for this task. If true, execute command functionality is turned on for all the containers in the task.
executionStoppedAt (datetime) --
The Unix timestamp for the time when the task execution stopped.
group (string) --
The name of the task group that's associated with the task.
healthStatus (string) --
The health status for the task. It's determined by the health of the essential containers in the task. If all essential containers in the task are reporting as HEALTHY, the task status also reports as HEALTHY. If any essential containers in the task are reporting as UNHEALTHY or UNKNOWN, the task status also reports as UNHEALTHY or UNKNOWN.
inferenceAccelerators (list) --
The Elastic Inference accelerator that's associated with the task.
(dict) --
Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name. The deviceName must also be referenced in a container definition as a ResourceRequirement.
deviceType (string) --
The Elastic Inference accelerator type to use.
lastStatus (string) --
The last known status for the task. For more information, see Task Lifecycle.
launchType (string) --
The infrastructure your task runs on. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The amount of memory (in MiB) that the task uses as expressed in a task definition. It can be expressed as an integer using MiB (for example, 1024). If it's expressed as a string using GB (for example, 1GB or 1 GB), it's converted to an integer indicating the MiB when the task definition is registered.
If you use the EC2 launch type, this field is optional.
If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines the range of supported values for the cpu parameter.
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu values: 256 (.25 vCPU)
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available cpu values: 512 (.5 vCPU)
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available cpu values: 1024 (1 vCPU)
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available cpu values: 2048 (2 vCPU)
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available cpu values: 4096 (4 vCPU)
Between 16 GB and 60 GB in 4 GB increments - Available cpu values: 8192 (8 vCPU) This option requires Linux platform 1.4.0 or later.
Between 32 GB and 120 GB in 8 GB increments - Available cpu values: 16384 (16 vCPU) This option requires Linux platform 1.4.0 or later.
overrides (dict) --
One or more container overrides.
containerOverrides (list) --
One or more container overrides that are sent to a task.
(dict) --
The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is {"containerOverrides": [ ] }. If a non-empty container override is specified, the name parameter must be included.
You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide.
name (string) --
The name of the container that receives the override. This parameter is required if any override is specified.
command (list) --
The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name.
(string) --
environment (list) --
The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
environmentFiles (list) --
A list of files containing the environment variables to pass to a container, instead of the value from the container definition.
(dict) --
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the VARIABLE values.
value (string) --
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type (string) --
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
cpu (integer) --
The number of cpu units reserved for the container, instead of the default value from the task definition. You must also specify a container name.
memory (integer) --
The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name.
memoryReservation (integer) --
The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name.
resourceRequirements (list) --
The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU.
(dict) --
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
value (string) --
The value for the specified resource type.
When the type is GPU, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the deviceName for an InferenceAccelerator specified in a task definition.
type (string) --
The type of resource to assign to a container.
cpu (string) --
The CPU override for the task.
inferenceAcceleratorOverrides (list) --
The Elastic Inference accelerator override for the task.
(dict) --
Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name to override for the task. This parameter must match a deviceName specified in the task definition.
deviceType (string) --
The Elastic Inference accelerator type to use.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The memory override for the task.
taskRoleArn (string) --
The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the Amazon Elastic Container Service Developer Guide.
ephemeralStorage (dict) --
The ephemeral storage setting override for the task.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
platformVersion (string) --
The platform version your task runs on. A platform version is only specified for tasks that use the Fargate launch type. If you didn't specify one, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
pullStartedAt (datetime) --
The Unix timestamp for the time when the container image pull began.
pullStoppedAt (datetime) --
The Unix timestamp for the time when the container image pull completed.
startedAt (datetime) --
The Unix timestamp for the time when the task started. More specifically, it's for the time when the task transitioned from the PENDING state to the RUNNING state.
startedBy (string) --
The tag specified when a task is started. If an Amazon ECS service started the task, the startedBy parameter contains the deployment ID of that service.
stopCode (string) --
The stop code indicating why a task was stopped. The stoppedReason might contain additional details.
For more information about stop code, see Stopped tasks error codes in the Amazon ECS Developer Guide.
stoppedAt (datetime) --
The Unix timestamp for the time when the task was stopped. More specifically, it's for the time when the task transitioned from the RUNNING state to the STOPPED state.
stoppedReason (string) --
The reason that the task was stopped.
stoppingAt (datetime) --
The Unix timestamp for the time when the task stops. More specifically, it's for the time when the task transitions from the RUNNING state to STOPPING.
tags (list) --
The metadata that you apply to the task to help you categorize and organize the task. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
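Because these limits are easy to trip over when tags are generated programmatically, a small, hypothetical Python helper like the following can check a proposed tag before it is sent; the limits simply mirror the restrictions listed above.

def validate_tag(key, value):
    # Mirrors the documented limits: 128-character keys, 256-character values,
    # and no aws: prefix (in any letter case) on either part.
    if len(key) > 128:
        raise ValueError('tag key exceeds 128 characters')
    if len(value) > 256:
        raise ValueError('tag value exceeds 256 characters')
    if key.lower().startswith('aws:') or value.lower().startswith('aws:'):
        raise ValueError('the aws: prefix is reserved for Amazon Web Services use')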
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
taskArn (string) --
The Amazon Resource Name (ARN) of the task.
taskDefinitionArn (string) --
The ARN of the task definition that creates the task.
version (integer) --
The version counter for the task. Every time a task experiences a change that starts a CloudWatch event, the version counter is incremented. If you replicate your Amazon ECS task state with CloudWatch Events, you can compare the version of a task reported by the Amazon ECS API actions with the version reported in CloudWatch Events for the task (inside the detail object) to verify that the version in your event stream is current.
ephemeralStorage (dict) --
The ephemeral storage settings for the task.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task.
sizeInGiB (integer) --
The total amount, in GiB, of the ephemeral storage to set for the task. The minimum supported value is 20 GiB and the maximum supported value is 200 GiB.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the task.
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide.
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
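Because these operations report per-resource problems through the failures list rather than raising an exception, callers generally inspect it after every call. A minimal boto3 sketch, assuming a placeholder cluster name and task ARN:

import boto3

ecs = boto3.client('ecs')
response = ecs.describe_tasks(
    cluster='example-cluster',                                     # assumed cluster name
    tasks=['arn:aws:ecs:us-east-1:111122223333:task/example-id']   # assumed task ARN
)
for failure in response.get('failures', []):
    # Each entry carries the resource ARN, a reason code, and optional detail text.
    print(failure.get('arn'), failure.get('reason'), failure.get('detail'))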
{'task': {'launchType': {'MANAGED_INSTANCES'}}}
Stops a running task. Any tags associated with the task will be deleted.
When you call StopTask on a task, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM value and a default 30-second timeout, after which the SIGKILL value is sent and the containers are forcibly stopped. If the container handles the SIGTERM value gracefully and exits within 30 seconds from receiving it, no SIGKILL value is sent.
For Windows containers, POSIX signals do not work, and the runtime stops the container by sending a CTRL_SHUTDOWN_EVENT. For more information, see Unable to react to graceful shutdown of (Windows) container #25982 on GitHub.
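Because the containers receive the SIGTERM value first and the SIGKILL value only after the timeout, a long-running container process can trap SIGTERM and finish in-flight work before exiting. The following is a generic Python sketch of that pattern, not tied to any ECS API:

import signal
import sys
import time

shutting_down = False

def handle_sigterm(signum, frame):
    # Record the request so the main loop can exit cleanly before SIGKILL is sent.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

while not shutting_down:
    time.sleep(1)  # placeholder for real work

sys.exit(0)  # exiting within the stop timeout avoids the forced SIGKILL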
See also: AWS API Documentation
Request Syntax
client.stop_task( cluster='string', task='string', reason='string' )
string
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the task to stop. If you do not specify a cluster, the default cluster is assumed.
string
[REQUIRED]
The full Amazon Resource Name (ARN) of the task.
string
An optional message specified when a task is stopped. For example, if you're using a custom scheduler, you can use this parameter to specify the reason for stopping the task, and the message appears in subsequent DescribeTasks API operations on this task.
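A minimal boto3 sketch of the request described above; the cluster name, task ARN, and reason text are placeholders:

import boto3

ecs = boto3.client('ecs')
response = ecs.stop_task(
    cluster='example-cluster',                                   # optional; the default cluster is assumed if omitted
    task='arn:aws:ecs:us-east-1:111122223333:task/example-id',   # assumed task ARN
    reason='Stopped by deployment tooling'                       # optional message surfaced by DescribeTasks
)
print(response['task']['lastStatus'])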
dict
Response Syntax
{ 'task': { 'attachments': [ { 'id': 'string', 'type': 'string', 'status': 'string', 'details': [ { 'name': 'string', 'value': 'string' }, ] }, ], 'attributes': [ { 'name': 'string', 'value': 'string', 'targetType': 'container-instance', 'targetId': 'string' }, ], 'availabilityZone': 'string', 'capacityProviderName': 'string', 'clusterArn': 'string', 'connectivity': 'CONNECTED'|'DISCONNECTED', 'connectivityAt': datetime(2015, 1, 1), 'containerInstanceArn': 'string', 'containers': [ { 'containerArn': 'string', 'taskArn': 'string', 'name': 'string', 'image': 'string', 'imageDigest': 'string', 'runtimeId': 'string', 'lastStatus': 'string', 'exitCode': 123, 'reason': 'string', 'networkBindings': [ { 'bindIP': 'string', 'containerPort': 123, 'hostPort': 123, 'protocol': 'tcp'|'udp', 'containerPortRange': 'string', 'hostPortRange': 'string' }, ], 'networkInterfaces': [ { 'attachmentId': 'string', 'privateIpv4Address': 'string', 'ipv6Address': 'string' }, ], 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'managedAgents': [ { 'lastStartedAt': datetime(2015, 1, 1), 'name': 'ExecuteCommandAgent', 'reason': 'string', 'lastStatus': 'string' }, ], 'cpu': 'string', 'memory': 'string', 'memoryReservation': 'string', 'gpuIds': [ 'string', ] }, ], 'cpu': 'string', 'createdAt': datetime(2015, 1, 1), 'desiredStatus': 'string', 'enableExecuteCommand': True|False, 'executionStoppedAt': datetime(2015, 1, 1), 'group': 'string', 'healthStatus': 'HEALTHY'|'UNHEALTHY'|'UNKNOWN', 'inferenceAccelerators': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'lastStatus': 'string', 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'memory': 'string', 'overrides': { 'containerOverrides': [ { 'name': 'string', 'command': [ 'string', ], 'environment': [ { 'name': 'string', 'value': 'string' }, ], 'environmentFiles': [ { 'value': 'string', 'type': 's3' }, ], 'cpu': 123, 'memory': 123, 'memoryReservation': 123, 'resourceRequirements': [ { 'value': 'string', 'type': 'GPU'|'InferenceAccelerator' }, ] }, ], 'cpu': 'string', 'inferenceAcceleratorOverrides': [ { 'deviceName': 'string', 'deviceType': 'string' }, ], 'executionRoleArn': 'string', 'memory': 'string', 'taskRoleArn': 'string', 'ephemeralStorage': { 'sizeInGiB': 123 } }, 'platformVersion': 'string', 'platformFamily': 'string', 'pullStartedAt': datetime(2015, 1, 1), 'pullStoppedAt': datetime(2015, 1, 1), 'startedAt': datetime(2015, 1, 1), 'startedBy': 'string', 'stopCode': 'TaskFailedToStart'|'EssentialContainerExited'|'UserInitiated'|'ServiceSchedulerInitiated'|'SpotInterruption'|'TerminationNotice', 'stoppedAt': datetime(2015, 1, 1), 'stoppedReason': 'string', 'stoppingAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'taskArn': 'string', 'taskDefinitionArn': 'string', 'version': 123, 'ephemeralStorage': { 'sizeInGiB': 123 }, 'fargateEphemeralStorage': { 'sizeInGiB': 123, 'kmsKeyId': 'string' } } }Response Structure
(dict) --
task (dict) --
The task that was stopped.
attachments (list) --
The Elastic Network Adapter that's associated with the task if the task uses the awsvpc network mode.
(dict) --
An object representing a container instance or task attachment.
id (string) --
The unique identifier for the attachment.
type (string) --
The type of the attachment, such as ElasticNetworkInterface, Service Connect, and AmazonElasticBlockStorage.
status (string) --
The status of the attachment. Valid values are PRECREATED, CREATED, ATTACHING, ATTACHED, DETACHING, DETACHED, DELETED, and FAILED.
details (list) --
Details of the attachment.
For elastic network interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address.
For Service Connect services, this includes portName, clientAliases, discoveryName, and ingressPortOverride.
For Elastic Block Storage, this includes roleArn, deleteOnTermination, volumeName, volumeId, and statusReason (only when the attachment fails to create or attach).
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
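For tasks that use the awsvpc network mode, the private IPv4 address is typically surfaced as one of these name-value details on the ElasticNetworkInterface attachment. A hedged sketch of reading it back from a task dictionary returned by this operation (the detail name 'privateIPv4Address' is the usual convention and is treated here as an assumption):

from typing import Optional

def private_ipv4(task: dict) -> Optional[str]:
    # Walk the task's attachments and return the ENI's private IPv4 address, if present.
    for attachment in task.get('attachments', []):
        if attachment.get('type') == 'ElasticNetworkInterface':
            for detail in attachment.get('details', []):
                if detail.get('name') == 'privateIPv4Address':
                    return detail.get('value')
    return None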
attributes (list) --
The attributes of the task.
(dict) --
An attribute is a name-value pair that's associated with an Amazon ECS object. Use attributes to extend the Amazon ECS data model by adding custom metadata to your resources. For more information, see Attributes in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the attribute. The name must contain between 1 and 128 characters. The name may contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), forward slashes (/), back slashes (\), or periods (.).
value (string) --
The value of the attribute. The value must contain between 1 and 128 characters. It can contain letters (uppercase and lowercase), numbers, hyphens (-), underscores (_), periods (.), at signs (@), forward slashes (/), back slashes (\), colons (:), or spaces. The value can't start or end with a space.
targetType (string) --
The type of the target to attach the attribute with. This parameter is required if you use the short form ID for a resource instead of the full ARN.
targetId (string) --
The ID of the target. You can specify the short form ID for a resource or the full Amazon Resource Name (ARN).
availabilityZone (string) --
The Availability Zone for the task.
capacityProviderName (string) --
The capacity provider that's associated with the task.
clusterArn (string) --
The ARN of the cluster that hosts the task.
connectivity (string) --
The connectivity status of a task.
connectivityAt (datetime) --
The Unix timestamp for the time when the task last went into CONNECTED status.
containerInstanceArn (string) --
The ARN of the container instances that host the task.
containers (list) --
The containers that are associated with the task.
(dict) --
A Docker container that's part of a task.
containerArn (string) --
The Amazon Resource Name (ARN) of the container.
taskArn (string) --
The ARN of the task.
name (string) --
The name of the container.
image (string) --
The image used for the container.
imageDigest (string) --
The container image manifest digest.
runtimeId (string) --
The ID of the Docker container.
lastStatus (string) --
The last known status of the container.
exitCode (integer) --
The exit code returned from the container.
reason (string) --
A short (1024 max characters) human-readable string to provide additional details about a running or stopped container.
networkBindings (list) --
The network bindings associated with the container.
(dict) --
Details on the network bindings between a container and its host container instance. After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the networkBindings section of DescribeTasks API responses.
bindIP (string) --
The IP address that the container is bound to on the container instance.
containerPort (integer) --
The port number on the container that's used with the network binding.
hostPort (integer) --
The port number on the host that's used with the network binding.
protocol (string) --
The protocol used for the network binding.
containerPortRange (string) --
The port number range on the container that's bound to the dynamically mapped host port range.
The following rules apply when you specify a containerPortRange:
You must use either the bridge network mode or the awsvpc network mode.
This parameter is available for both the EC2 and Fargate launch types.
This parameter is available for both the Linux and Windows operating systems.
The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the ecs-init package.
You can specify a maximum of 100 port ranges per container.
You do not specify a hostPortRange. The value of the hostPortRange is set as follows:
For containers in a task with the awsvpc network mode, the hostPortRange is set to the same value as the containerPortRange. This is a static mapping strategy.
For containers in a task with the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes it to docker to bind them to the container ports.
The containerPortRange valid values are between 1 and 65535.
A port can only be included in one port mapping per container.
You cannot specify overlapping port ranges.
The first port in the range must be less than the last port in the range.
Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports. For more information, see Issue #11185 on the Github website. For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the Amazon ECS Developer Guide.
You can call DescribeTasks to view the hostPortRange which are the host ports that are bound to the container ports.
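A hedged boto3 sketch of a task definition that follows these rules; the family, image, and port range are placeholder assumptions:

import boto3

ecs = boto3.client('ecs')
response = ecs.register_task_definition(
    family='example-portrange',
    networkMode='bridge',  # containerPortRange requires the bridge or awsvpc network mode
    containerDefinitions=[
        {
            'name': 'web',
            'image': 'public.ecr.aws/nginx/nginx:latest',  # assumed container image
            'memory': 256,
            'portMappings': [
                {
                    'containerPortRange': '8000-8010',  # no hostPortRange: the agent assigns open host ports
                    'protocol': 'tcp'
                }
            ]
        }
    ]
)

Once a task from this definition reaches RUNNING, the bound host ports appear as hostPortRange values in the networkBindings of a DescribeTasks response, as noted above.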
hostPortRange (string) --
The port number range on the host that's used with the network binding. This is assigned by Docker and delivered by the Amazon ECS agent.
networkInterfaces (list) --
The network interfaces associated with the container.
(dict) --
An object representing the elastic network interface for tasks that use the awsvpc network mode.
attachmentId (string) --
The attachment ID for the network interface.
privateIpv4Address (string) --
The private IPv4 address for the network interface.
ipv6Address (string) --
The private IPv6 address for the network interface.
healthStatus (string) --
The health status of the container. If health checks aren't configured for this container in its task definition, then it reports the health status as UNKNOWN.
managedAgents (list) --
The details of any Amazon ECS managed agents associated with the container.
(dict) --
Details about the managed agent status for the container.
lastStartedAt (datetime) --
The Unix timestamp for the time when the managed agent was last started.
name (string) --
The name of the managed agent. When the execute command feature is turned on, the managed agent name is ExecuteCommandAgent.
reason (string) --
The reason why the managed agent is in its current state.
lastStatus (string) --
The last known status of the managed agent.
cpu (string) --
The number of CPU units set for the container. The value is 0 if no value was specified in the container definition when the task definition was registered.
memory (string) --
The hard limit (in MiB) of memory set for the container.
memoryReservation (string) --
The soft limit (in MiB) of memory set for the container.
gpuIds (list) --
The IDs of each GPU assigned to the container.
(string) --
cpu (string) --
The number of CPU units used by the task as expressed in a task definition. It can be expressed as an integer using CPU units (for example, 1024). It can also be expressed as a string using vCPUs (for example, 1 vCPU or 1 vcpu). String values are converted to an integer that indicates the CPU units when the task definition is registered.
If you're using the EC2 launch type or the external launch type, this field is optional. Supported values are between 128 CPU units (0.125 vCPUs) and 196608 CPU units (192 vCPUs). If you do not specify a value, the parameter is ignored.
This field is required for Fargate. For information about the valid values, see Task size in the Amazon Elastic Container Service Developer Guide.
createdAt (datetime) --
The Unix timestamp for the time when the task was created. More specifically, it's for the time when the task entered the PENDING state.
desiredStatus (string) --
The desired status of the task. For more information, see Task Lifecycle.
enableExecuteCommand (boolean) --
Determines whether execute command functionality is turned on for this task. If true, execute command functionality is turned on for all the containers in the task.
executionStoppedAt (datetime) --
The Unix timestamp for the time when the task execution stopped.
group (string) --
The name of the task group that's associated with the task.
healthStatus (string) --
The health status for the task. It's determined by the health of the essential containers in the task. If all essential containers in the task are reporting as HEALTHY, the task status also reports as HEALTHY. If any essential containers in the task are reporting as UNHEALTHY or UNKNOWN, the task status also reports as UNHEALTHY or UNKNOWN.
inferenceAccelerators (list) --
The Elastic Inference accelerator that's associated with the task.
(dict) --
Details on an Elastic Inference accelerator. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name. The deviceName must also be referenced in a container definition as a ResourceRequirement.
deviceType (string) --
The Elastic Inference accelerator type to use.
lastStatus (string) --
The last known status for the task. For more information, see Task Lifecycle.
launchType (string) --
The infrastructure on which your task runs. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The amount of memory (in MiB) that the task uses as expressed in a task definition. It can be expressed as an integer using MiB (for example, 1024). If it's expressed as a string using GB (for example, 1GB or 1 GB), it's converted to an integer indicating the MiB when the task definition is registered.
If you use the EC2 launch type, this field is optional.
If you use the Fargate launch type, this field is required. You must use one of the following values. The value that you choose determines the range of supported values for the cpu parameter.
512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu values: 256 (.25 vCPU)
1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available cpu values: 512 (.5 vCPU)
2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available cpu values: 1024 (1 vCPU)
Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available cpu values: 2048 (2 vCPU)
Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available cpu values: 4096 (4 vCPU)
Between 16 GB and 60 GB in 4 GB increments - Available cpu values: 8192 (8 vCPU). This option requires Linux platform 1.4.0 or later.
Between 32 GB and 120 GB in 8 GB increments - Available cpu values: 16384 (16 vCPU). This option requires Linux platform 1.4.0 or later.
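The pairs above can be encoded directly if you want to sanity-check a requested Fargate task size before registering it. A hedged sketch that mirrors the list (not an official library):

# Fargate CPU units mapped to the memory values (in MiB) they support, per the list above.
FARGATE_CPU_MEMORY = {
    256: [512, 1024, 2048],
    512: [1024, 2048, 3072, 4096],
    1024: [2048, 3072, 4096, 5120, 6144, 7168, 8192],
    2048: list(range(4096, 16384 + 1, 1024)),
    4096: list(range(8192, 30720 + 1, 1024)),
    8192: list(range(16384, 61440 + 1, 4096)),    # 16 GB to 60 GB in 4 GB steps, Linux platform 1.4.0 or later
    16384: list(range(32768, 122880 + 1, 8192)),  # 32 GB to 120 GB in 8 GB steps, Linux platform 1.4.0 or later
}

def is_valid_fargate_size(cpu_units, memory_mib):
    # Returns True only when the memory value is one the chosen cpu value supports.
    return memory_mib in FARGATE_CPU_MEMORY.get(cpu_units, [])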
overrides (dict) --
One or more container overrides.
containerOverrides (list) --
One or more container overrides that are sent to a task.
(dict) --
The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is {"containerOverrides": [ ] }. If a non-empty container override is specified, the name parameter must be included.
You can use Secrets Manager or Amazon Web Services Systems Manager Parameter Store to store the sensitive data. For more information, see Retrieve secrets through environment variables in the Amazon ECS Developer Guide.
name (string) --
The name of the container that receives the override. This parameter is required if any override is specified.
command (list) --
The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name.
(string) --
environment (list) --
The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name.
(dict) --
A key-value pair object.
name (string) --
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (string) --
The value of the key-value pair. For environment variables, this is the value of the environment variable.
environmentFiles (list) --
A list of files containing the environment variables to pass to a container, instead of the value from the container definition.
(dict) --
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
Linux platform version 1.4.0 or later.
Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the VARIABLE values.
value (string) --
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type (string) --
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
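To make the format concrete, here is a hedged illustration of the two pieces involved: the contents of a hypothetical .env object in Amazon S3 and the override entry that points at it (the bucket, key, and variables are assumptions).

# Contents of an object such as s3://example-bucket/app.env:
#   # comment lines are ignored
#   LOG_LEVEL=debug
#   FEATURE_FLAG=enabled
environment_files = [
    {
        'value': 'arn:aws:s3:::example-bucket/app.env',  # assumed S3 object ARN
        'type': 's3'                                     # the only supported type
    }
]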
cpu (integer) --
The number of cpu units reserved for the container, instead of the default value from the task definition. You must also specify a container name.
memory (integer) --
The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name.
memoryReservation (integer) --
The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name.
resourceRequirements (list) --
The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU.
(dict) --
The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
value (string) --
The value for the specified resource type.
When the type is GPU, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on.
When the type is InferenceAccelerator, the value matches the deviceName for an InferenceAccelerator specified in a task definition.
type (string) --
The type of resource to assign to a container.
cpu (string) --
The CPU override for the task.
inferenceAcceleratorOverrides (list) --
The Elastic Inference accelerator override for the task.
(dict) --
Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.
deviceName (string) --
The Elastic Inference accelerator device name to override for the task. This parameter must match a deviceName specified in the task definition.
deviceType (string) --
The Elastic Inference accelerator type to use.
executionRoleArn (string) --
The Amazon Resource Name (ARN) of the task execution role override for the task. For more information, see Amazon ECS task execution IAM role in the Amazon Elastic Container Service Developer Guide.
memory (string) --
The memory override for the task.
taskRoleArn (string) --
The Amazon Resource Name (ARN) of the role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the Amazon Elastic Container Service Developer Guide.
ephemeralStorage (dict) --
The ephemeral storage setting override for the task.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
platformVersion (string) --
The platform version that your task runs on. A platform version is only specified for tasks that use the Fargate launch type. If you didn't specify one, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
pullStartedAt (datetime) --
The Unix timestamp for the time when the container image pull began.
pullStoppedAt (datetime) --
The Unix timestamp for the time when the container image pull completed.
startedAt (datetime) --
The Unix timestamp for the time when the task started. More specifically, it's for the time when the task transitioned from the PENDING state to the RUNNING state.
startedBy (string) --
The tag specified when a task is started. If an Amazon ECS service started the task, the startedBy parameter contains the deployment ID of that service.
stopCode (string) --
The stop code indicating why a task was stopped. The stoppedReason might contain additional details.
For more information about stop code, see Stopped tasks error codes in the Amazon ECS Developer Guide.
stoppedAt (datetime) --
The Unix timestamp for the time when the task was stopped. More specifically, it's for the time when the task transitioned from the RUNNING state to the STOPPED state.
stoppedReason (string) --
The reason that the task was stopped.
stoppingAt (datetime) --
The Unix timestamp for the time when the task stops. More specifically, it's for the time when the task transitions from the RUNNING state to STOPPING.
tags (list) --
The metadata that you apply to the task to help you categorize and organize the task. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
taskArn (string) --
The Amazon Resource Name (ARN) of the task.
taskDefinitionArn (string) --
The ARN of the task definition that creates the task.
version (integer) --
The version counter for the task. Every time a task experiences a change that starts a CloudWatch event, the version counter is incremented. If you replicate your Amazon ECS task state with CloudWatch Events, you can compare the version of a task reported by the Amazon ECS API actions with the version reported in CloudWatch Events for the task (inside the detail object) to verify that the version in your event stream is current.
ephemeralStorage (dict) --
The ephemeral storage settings for the task.
sizeInGiB (integer) --
The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task.
sizeInGiB (integer) --
The total amount, in GiB, of the ephemeral storage to set for the task. The minimum supported value is 20 GiB and the maximum supported value is 200 GiB.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the task.
{'cluster': 'string', 'managedInstancesProvider': {'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': {'ec2InstanceProfileArn': 'string', 'instanceRequirements': {'acceleratorCount': {'max': 'integer', 'min': 'integer'}, 'acceleratorManufacturers': ['amazon-web-services ' '| ' 'amd ' '| ' 'nvidia ' '| ' 'xilinx ' '| ' 'habana'], 'acceleratorNames': ['a100 ' '| ' 'inferentia ' '| ' 'k520 ' '| ' 'k80 ' '| ' 'm60 ' '| ' 'radeon-pro-v520 ' '| ' 't4 ' '| ' 'vu9p ' '| ' 'v100 ' '| ' 'a10g ' '| ' 'h100 ' '| ' 't4g'], 'acceleratorTotalMemoryMiB': {'max': 'integer', 'min': 'integer'}, 'acceleratorTypes': ['gpu ' '| ' 'fpga ' '| ' 'inference'], 'allowedInstanceTypes': ['string'], 'bareMetal': 'included ' '| ' 'required ' '| ' 'excluded', 'baselineEbsBandwidthMbps': {'max': 'integer', 'min': 'integer'}, 'burstablePerformance': 'included ' '| ' 'required ' '| ' 'excluded', 'cpuManufacturers': ['intel ' '| ' 'amd ' '| ' 'amazon-web-services'], 'excludedInstanceTypes': ['string'], 'instanceGenerations': ['current ' '| ' 'previous'], 'localStorage': 'included ' '| ' 'required ' '| ' 'excluded', 'localStorageTypes': ['hdd ' '| ' 'ssd'], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 'integer', 'memoryGiBPerVCpu': {'max': 'double', 'min': 'double'}, 'memoryMiB': {'max': 'integer', 'min': 'integer'}, 'networkBandwidthGbps': {'max': 'double', 'min': 'double'}, 'networkInterfaceCount': {'max': 'integer', 'min': 'integer'}, 'onDemandMaxPricePercentageOverLowestPrice': 'integer', 'requireHibernateSupport': 'boolean', 'spotMaxPricePercentageOverLowestPrice': 'integer', 'totalLocalStorageGB': {'max': 'double', 'min': 'double'}, 'vCpuCount': {'max': 'integer', 'min': 'integer'}}, 'monitoring': 'BASIC ' '| ' 'DETAILED', 'networkConfiguration': {'securityGroups': ['string'], 'subnets': ['string']}, 'storageConfiguration': {'storageSizeGiB': 'integer'}}, 'propagateTags': 'CAPACITY_PROVIDER | NONE'}}Response
{'capacityProvider': {'cluster': 'string', 'managedInstancesProvider': {'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': {'ec2InstanceProfileArn': 'string', 'instanceRequirements': {'acceleratorCount': {'max': 'integer', 'min': 'integer'}, 'acceleratorManufacturers': ['amazon-web-services ' '| ' 'amd ' '| ' 'nvidia ' '| ' 'xilinx ' '| ' 'habana'], 'acceleratorNames': ['a100 ' '| ' 'inferentia ' '| ' 'k520 ' '| ' 'k80 ' '| ' 'm60 ' '| ' 'radeon-pro-v520 ' '| ' 't4 ' '| ' 'vu9p ' '| ' 'v100 ' '| ' 'a10g ' '| ' 'h100 ' '| ' 't4g'], 'acceleratorTotalMemoryMiB': {'max': 'integer', 'min': 'integer'}, 'acceleratorTypes': ['gpu ' '| ' 'fpga ' '| ' 'inference'], 'allowedInstanceTypes': ['string'], 'bareMetal': 'included ' '| ' 'required ' '| ' 'excluded', 'baselineEbsBandwidthMbps': {'max': 'integer', 'min': 'integer'}, 'burstablePerformance': 'included ' '| ' 'required ' '| ' 'excluded', 'cpuManufacturers': ['intel ' '| ' 'amd ' '| ' 'amazon-web-services'], 'excludedInstanceTypes': ['string'], 'instanceGenerations': ['current ' '| ' 'previous'], 'localStorage': 'included ' '| ' 'required ' '| ' 'excluded', 'localStorageTypes': ['hdd ' '| ' 'ssd'], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 'integer', 'memoryGiBPerVCpu': {'max': 'double', 'min': 'double'}, 'memoryMiB': {'max': 'integer', 'min': 'integer'}, 'networkBandwidthGbps': {'max': 'double', 'min': 'double'}, 'networkInterfaceCount': {'max': 'integer', 'min': 'integer'}, 'onDemandMaxPricePercentageOverLowestPrice': 'integer', 'requireHibernateSupport': 'boolean', 'spotMaxPricePercentageOverLowestPrice': 'integer', 'totalLocalStorageGB': {'max': 'double', 'min': 'double'}, 'vCpuCount': {'max': 'integer', 'min': 'integer'}}, 'monitoring': 'BASIC ' '| ' 'DETAILED', 'networkConfiguration': {'securityGroups': ['string'], 'subnets': ['string']}, 'storageConfiguration': {'storageSizeGiB': 'integer'}}, 'propagateTags': 'CAPACITY_PROVIDER ' '| NONE'}, 'status': {'PROVISIONING', 'DEPROVISIONING'}, 'type': 'EC2_AUTOSCALING | MANAGED_INSTANCES | FARGATE | ' 'FARGATE_SPOT', 'updateStatus': {'CREATE_COMPLETE', 'CREATE_FAILED', 'CREATE_IN_PROGRESS'}}}
Modifies the parameters for a capacity provider.
These changes apply only to new Amazon ECS Managed Instances or EC2 instances; existing instances aren't affected.
See also: AWS API Documentation
Request Syntax
client.update_capacity_provider( name='string', cluster='string', autoScalingGroupProvider={ 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' }, managedInstancesProvider={ 'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': { 'ec2InstanceProfileArn': 'string', 'networkConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ] }, 'storageConfiguration': { 'storageSizeGiB': 123 }, 'monitoring': 'BASIC'|'DETAILED', 'instanceRequirements': { 'vCpuCount': { 'min': 123, 'max': 123 }, 'memoryMiB': { 'min': 123, 'max': 123 }, 'cpuManufacturers': [ 'intel'|'amd'|'amazon-web-services', ], 'memoryGiBPerVCpu': { 'min': 123.0, 'max': 123.0 }, 'excludedInstanceTypes': [ 'string', ], 'instanceGenerations': [ 'current'|'previous', ], 'spotMaxPricePercentageOverLowestPrice': 123, 'onDemandMaxPricePercentageOverLowestPrice': 123, 'bareMetal': 'included'|'required'|'excluded', 'burstablePerformance': 'included'|'required'|'excluded', 'requireHibernateSupport': True|False, 'networkInterfaceCount': { 'min': 123, 'max': 123 }, 'localStorage': 'included'|'required'|'excluded', 'localStorageTypes': [ 'hdd'|'ssd', ], 'totalLocalStorageGB': { 'min': 123.0, 'max': 123.0 }, 'baselineEbsBandwidthMbps': { 'min': 123, 'max': 123 }, 'acceleratorTypes': [ 'gpu'|'fpga'|'inference', ], 'acceleratorCount': { 'min': 123, 'max': 123 }, 'acceleratorManufacturers': [ 'amazon-web-services'|'amd'|'nvidia'|'xilinx'|'habana', ], 'acceleratorNames': [ 'a100'|'inferentia'|'k520'|'k80'|'m60'|'radeon-pro-v520'|'t4'|'vu9p'|'v100'|'a10g'|'h100'|'t4g', ], 'acceleratorTotalMemoryMiB': { 'min': 123, 'max': 123 }, 'networkBandwidthGbps': { 'min': 123.0, 'max': 123.0 }, 'allowedInstanceTypes': [ 'string', ], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 123 } }, 'propagateTags': 'CAPACITY_PROVIDER'|'NONE' } )
string
[REQUIRED]
The name of the capacity provider to update.
string
The name of the cluster that contains the capacity provider to update. Managed instances capacity providers are cluster-scoped and can only be updated within their associated cluster.
dict
An object that represents the parameters to update for the Auto Scaling group capacity provider.
managedScaling (dict) --
The managed scaling settings for the Auto Scaling group capacity provider.
status (string) --
Determines whether to use managed scaling for the capacity provider.
targetCapacity (integer) --
The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than 0 and less than or equal to 100. For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a targetCapacity of 90. The default value of 100 percent results in the Amazon EC2 instances in your Auto Scaling group being completely used.
minimumScalingStepSize (integer) --
The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.
When additional capacity is required, Amazon ECS will scale up the minimum scaling step size even if the actual demand is less than the minimum scaling step size.
maximumScalingStepSize (integer) --
The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. If this parameter is omitted, the default value of 10000 is used.
instanceWarmupPeriod (integer) --
The period of time, in seconds, after a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of 300 seconds is used.
managedTerminationProtection (string) --
The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection.
When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on. For more information, see Instance Protection in the Auto Scaling User Guide.
When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in.
managedDraining (string) --
The managed draining option for the Auto Scaling group capacity provider. When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider.
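A minimal boto3 sketch of updating only the Auto Scaling group settings of an existing capacity provider; the provider name and numeric values are placeholders:

import boto3

ecs = boto3.client('ecs')
response = ecs.update_capacity_provider(
    name='example-asg-provider',  # assumed capacity provider name
    autoScalingGroupProvider={
        'managedScaling': {
            'status': 'ENABLED',
            'targetCapacity': 90,  # keep roughly 10% spare capacity
            'minimumScalingStepSize': 1,
            'maximumScalingStepSize': 100,
            'instanceWarmupPeriod': 300
        },
        'managedTerminationProtection': 'ENABLED',
        'managedDraining': 'ENABLED'
    }
)
print(response['capacityProvider']['updateStatus'])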
dict
The updated configuration for the Amazon ECS Managed Instances provider. You can modify the infrastructure role, instance launch template, and tag propagation settings. Changes take effect for new instances launched after the update.
infrastructureRoleArn (string) -- [REQUIRED]
The updated Amazon Resource Name (ARN) of the infrastructure role. The new role must have the necessary permissions to manage instances and access required Amazon Web Services services.
For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
instanceLaunchTemplate (dict) -- [REQUIRED]
The updated launch template configuration. Changes to the launch template affect new instances launched after the update, while existing instances continue to use their original configuration.
ec2InstanceProfileArn (string) --
The updated Amazon Resource Name (ARN) of the instance profile. The new instance profile must have the necessary permissions for your tasks.
For more information, see Amazon ECS instance profile for Managed Instances in the Amazon ECS Developer Guide.
networkConfiguration (dict) --
The updated network configuration for Amazon ECS Managed Instances. Changes to subnets and security groups affect new instances launched after the update.
subnets (list) --
The list of subnet IDs where Amazon ECS can launch Amazon ECS Managed Instances. Instances are distributed across the specified subnets for high availability. All subnets must be in the same VPC.
(string) --
securityGroups (list) --
The list of security group IDs to apply to Amazon ECS Managed Instances. These security groups control the network traffic allowed to and from the instances.
(string) --
storageConfiguration (dict) --
The updated storage configuration for Amazon ECS Managed Instances. Changes to storage settings apply to new instances launched after the update.
storageSizeGiB (integer) --
The size of the task volume.
monitoring (string) --
CloudWatch provides two categories of monitoring: basic monitoring and detailed monitoring. By default, your managed instance is configured for basic monitoring. You can optionally enable detailed monitoring to help you more quickly identify and act on operational issues. You can enable or turn off detailed monitoring at launch or when the managed instance is running or stopped. For more information, see Detailed monitoring for Amazon ECS Managed Instances in the Amazon ECS Developer Guide.
instanceRequirements (dict) --
The updated instance requirements for attribute-based instance type selection. Changes to instance requirements affect which instance types Amazon ECS selects for new instances.
vCpuCount (dict) -- [REQUIRED]
The minimum and maximum number of vCPUs for the instance types. Amazon ECS selects instance types that have vCPU counts within this range.
min (integer) -- [REQUIRED]
The minimum number of vCPUs. Instance types with fewer vCPUs than this value are excluded from selection.
max (integer) --
The maximum number of vCPUs. Instance types with more vCPUs than this value are excluded from selection.
memoryMiB (dict) -- [REQUIRED]
The minimum and maximum amount of memory in mebibytes (MiB) for the instance types. Amazon ECS selects instance types that have memory within this range.
min (integer) -- [REQUIRED]
The minimum amount of memory in MiB. Instance types with less memory than this value are excluded from selection.
max (integer) --
The maximum amount of memory in MiB. Instance types with more memory than this value are excluded from selection.
cpuManufacturers (list) --
The CPU manufacturers to include or exclude. You can specify intel, amd, or amazon-web-services to control which CPU types are used for your workloads.
(string) --
memoryGiBPerVCpu (dict) --
The minimum and maximum amount of memory per vCPU in gibibytes (GiB). This helps ensure that instance types have the appropriate memory-to-CPU ratio for your workloads.
min (float) --
The minimum amount of memory per vCPU in GiB. Instance types with a lower memory-to-vCPU ratio are excluded from selection.
max (float) --
The maximum amount of memory per vCPU in GiB. Instance types with a higher memory-to-vCPU ratio are excluded from selection.
excludedInstanceTypes (list) --
The instance types to exclude from selection. Use this to prevent Amazon ECS from selecting specific instance types that may not be suitable for your workloads.
(string) --
instanceGenerations (list) --
The instance generations to include. You can specify current to use the latest generation instances, or previous to include previous generation instances for cost optimization.
(string) --
spotMaxPricePercentageOverLowestPrice (integer) --
The maximum price for Spot instances as a percentage over the lowest priced On-Demand instance. This helps control Spot instance costs while maintaining access to capacity.
onDemandMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for On-Demand Instances, as a percentage higher than an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon ECS selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold.
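As an illustrative example with assumed figures: if the threshold is 20 and the identified On-Demand price is USD 0.10 per hour, instance types priced above USD 0.12 per hour are excluded from selection.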
bareMetal (string) --
Indicates whether to include bare metal instance types. Set to included to allow bare metal instances, excluded to exclude them, or required to use only bare metal instances.
burstablePerformance (string) --
Indicates whether to include burstable performance instance types (T2, T3, T3a, T4g). Set to included to allow burstable instances, excluded to exclude them, or required to use only burstable instances.
requireHibernateSupport (boolean) --
Indicates whether the instance types must support hibernation. When set to true, only instance types that support hibernation are selected.
networkInterfaceCount (dict) --
The minimum and maximum number of network interfaces for the instance types. This is useful for workloads that require multiple network interfaces.
min (integer) --
The minimum number of network interfaces. Instance types that support fewer network interfaces are excluded from selection.
max (integer) --
The maximum number of network interfaces. Instance types that support more network interfaces are excluded from selection.
localStorage (string) --
Indicates whether to include instance types with local storage. Set to included to allow local storage, excluded to exclude it, or required to use only instances with local storage.
localStorageTypes (list) --
The local storage types to include. You can specify hdd for hard disk drives, ssd for solid state drives, or both.
(string) --
totalLocalStorageGB (dict) --
The minimum and maximum total local storage in gigabytes (GB) for instance types with local storage.
min (float) --
The minimum total local storage in GB. Instance types with less local storage are excluded from selection.
max (float) --
The maximum total local storage in GB. Instance types with more local storage are excluded from selection.
baselineEbsBandwidthMbps (dict) --
The minimum and maximum baseline Amazon EBS bandwidth in megabits per second (Mbps). This is important for workloads with high storage I/O requirements.
min (integer) --
The minimum baseline Amazon EBS bandwidth in Mbps. Instance types with lower Amazon EBS bandwidth are excluded from selection.
max (integer) --
The maximum baseline Amazon EBS bandwidth in Mbps. Instance types with higher Amazon EBS bandwidth are excluded from selection.
acceleratorTypes (list) --
The accelerator types to include. You can specify gpu for graphics processing units, fpga for field programmable gate arrays, or inference for machine learning inference accelerators.
(string) --
acceleratorCount (dict) --
The minimum and maximum number of accelerators for the instance types. This is used when you need instances with specific numbers of GPUs or other accelerators.
min (integer) --
The minimum number of accelerators. Instance types with fewer accelerators are excluded from selection.
max (integer) --
The maximum number of accelerators. Instance types with more accelerators are excluded from selection.
acceleratorManufacturers (list) --
The accelerator manufacturers to include. You can specify nvidia, amd, amazon-web-services, or xilinx depending on your accelerator requirements.
(string) --
acceleratorNames (list) --
The specific accelerator names to include. For example, you can specify a100, v100, k80, or other specific accelerator models.
(string) --
acceleratorTotalMemoryMiB (dict) --
The minimum and maximum total accelerator memory in mebibytes (MiB). This is important for GPU workloads that require specific amounts of video memory.
min (integer) --
The minimum total accelerator memory in MiB. Instance types with less accelerator memory are excluded from selection.
max (integer) --
The maximum total accelerator memory in MiB. Instance types with more accelerator memory are excluded from selection.
networkBandwidthGbps (dict) --
The minimum and maximum network bandwidth in gigabits per second (Gbps). This is crucial for network-intensive workloads that require high throughput.
min (float) --
The minimum network bandwidth in Gbps. Instance types with lower network bandwidth are excluded from selection.
max (float) --
The maximum network bandwidth in Gbps. Instance types with higher network bandwidth are excluded from selection.
allowedInstanceTypes (list) --
The instance types to include in the selection. When specified, Amazon ECS only considers these instance types, subject to the other requirements specified.
(string) --
maxSpotPriceAsPercentageOfOptimalOnDemandPrice (integer) --
The maximum price for Spot instances as a percentage of the optimal On-Demand price. This provides more precise cost control for Spot instance selection.
propagateTags (string) --
The updated tag propagation setting. When changed, this affects only new instances launched after the update.
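Pulling the preceding parameters together, here is a hedged boto3 sketch of updating a Managed Instances capacity provider; every name, ARN, subnet, and security group below is a placeholder assumption:

import boto3

ecs = boto3.client('ecs')
response = ecs.update_capacity_provider(
    name='example-mi-provider',  # assumed capacity provider name
    cluster='example-cluster',   # Managed Instances providers are cluster-scoped
    managedInstancesProvider={
        'infrastructureRoleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole',
        'instanceLaunchTemplate': {
            'ec2InstanceProfileArn': 'arn:aws:iam::111122223333:instance-profile/ecsInstanceProfile',
            'networkConfiguration': {
                'subnets': ['subnet-0123456789abcdef0'],
                'securityGroups': ['sg-0123456789abcdef0']
            },
            'storageConfiguration': {'storageSizeGiB': 100},
            'monitoring': 'DETAILED',
            'instanceRequirements': {
                'vCpuCount': {'min': 2, 'max': 8},
                'memoryMiB': {'min': 4096, 'max': 32768}
            }
        },
        'propagateTags': 'CAPACITY_PROVIDER'
    }
)
print(response['capacityProvider']['updateStatus'])

Remember that these changes affect only instances launched after the update; existing instances keep their original configuration.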
dict
Response Syntax
{ 'capacityProvider': { 'capacityProviderArn': 'string', 'name': 'string', 'cluster': 'string', 'status': 'PROVISIONING'|'ACTIVE'|'DEPROVISIONING'|'INACTIVE', 'autoScalingGroupProvider': { 'autoScalingGroupArn': 'string', 'managedScaling': { 'status': 'ENABLED'|'DISABLED', 'targetCapacity': 123, 'minimumScalingStepSize': 123, 'maximumScalingStepSize': 123, 'instanceWarmupPeriod': 123 }, 'managedTerminationProtection': 'ENABLED'|'DISABLED', 'managedDraining': 'ENABLED'|'DISABLED' }, 'managedInstancesProvider': { 'infrastructureRoleArn': 'string', 'instanceLaunchTemplate': { 'ec2InstanceProfileArn': 'string', 'networkConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ] }, 'storageConfiguration': { 'storageSizeGiB': 123 }, 'monitoring': 'BASIC'|'DETAILED', 'instanceRequirements': { 'vCpuCount': { 'min': 123, 'max': 123 }, 'memoryMiB': { 'min': 123, 'max': 123 }, 'cpuManufacturers': [ 'intel'|'amd'|'amazon-web-services', ], 'memoryGiBPerVCpu': { 'min': 123.0, 'max': 123.0 }, 'excludedInstanceTypes': [ 'string', ], 'instanceGenerations': [ 'current'|'previous', ], 'spotMaxPricePercentageOverLowestPrice': 123, 'onDemandMaxPricePercentageOverLowestPrice': 123, 'bareMetal': 'included'|'required'|'excluded', 'burstablePerformance': 'included'|'required'|'excluded', 'requireHibernateSupport': True|False, 'networkInterfaceCount': { 'min': 123, 'max': 123 }, 'localStorage': 'included'|'required'|'excluded', 'localStorageTypes': [ 'hdd'|'ssd', ], 'totalLocalStorageGB': { 'min': 123.0, 'max': 123.0 }, 'baselineEbsBandwidthMbps': { 'min': 123, 'max': 123 }, 'acceleratorTypes': [ 'gpu'|'fpga'|'inference', ], 'acceleratorCount': { 'min': 123, 'max': 123 }, 'acceleratorManufacturers': [ 'amazon-web-services'|'amd'|'nvidia'|'xilinx'|'habana', ], 'acceleratorNames': [ 'a100'|'inferentia'|'k520'|'k80'|'m60'|'radeon-pro-v520'|'t4'|'vu9p'|'v100'|'a10g'|'h100'|'t4g', ], 'acceleratorTotalMemoryMiB': { 'min': 123, 'max': 123 }, 'networkBandwidthGbps': { 'min': 123.0, 'max': 123.0 }, 'allowedInstanceTypes': [ 'string', ], 'maxSpotPriceAsPercentageOfOptimalOnDemandPrice': 123 } }, 'propagateTags': 'CAPACITY_PROVIDER'|'NONE' }, 'updateStatus': 'CREATE_IN_PROGRESS'|'CREATE_COMPLETE'|'CREATE_FAILED'|'DELETE_IN_PROGRESS'|'DELETE_COMPLETE'|'DELETE_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_COMPLETE'|'UPDATE_FAILED', 'updateStatusReason': 'string', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'type': 'EC2_AUTOSCALING'|'MANAGED_INSTANCES'|'FARGATE'|'FARGATE_SPOT' } }
Response Structure
(dict) --
capacityProvider (dict) --
Details about the capacity provider.
capacityProviderArn (string) --
The Amazon Resource Name (ARN) that identifies the capacity provider.
name (string) --
The name of the capacity provider.
cluster (string) --
The cluster that this capacity provider is associated with. Managed instances capacity providers are cluster-scoped, meaning they can only be used within their associated cluster.
status (string) --
The current status of the capacity provider. Only capacity providers in an ACTIVE state can be used in a cluster. When a capacity provider is successfully deleted, it has an INACTIVE status.
autoScalingGroupProvider (dict) --
The Auto Scaling group settings for the capacity provider.
autoScalingGroupArn (string) --
The Amazon Resource Name (ARN) that identifies the Auto Scaling group, or the Auto Scaling group name.
managedScaling (dict) --
The managed scaling settings for the Auto Scaling group capacity provider.
status (string) --
Determines whether to use managed scaling for the capacity provider.
targetCapacity (integer) --
The target capacity utilization as a percentage for the capacity provider. The specified value must be greater than 0 and less than or equal to 100. For example, if you want the capacity provider to maintain 10% spare capacity, then that means the utilization is 90%, so use a targetCapacity of 90. The default value of 100 percent results in the Amazon EC2 instances in your Auto Scaling group being completely used.
minimumScalingStepSize (integer) --
The minimum number of Amazon EC2 instances that Amazon ECS will scale out at one time. The scale-in process is not affected by this parameter. If this parameter is omitted, the default value of 1 is used.
When additional capacity is required, Amazon ECS will scale up the minimum scaling step size even if the actual demand is less than the minimum scaling step size.
maximumScalingStepSize (integer) --
The maximum number of Amazon EC2 instances that Amazon ECS will scale out at one time. If this parameter is omitted, the default value of 10000 is used.
instanceWarmupPeriod (integer) --
The period of time, in seconds, after which a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. If this parameter is omitted, the default value of 300 seconds is used.
managedTerminationProtection (string) --
The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection. The default is off.
When managed termination protection is on, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions on as well. For more information, see Instance Protection in the Auto Scaling User Guide.
When managed termination protection is off, your Amazon EC2 instances aren't protected from termination when the Auto Scaling group scales in.
managedDraining (string) --
The managed draining option for the Auto Scaling group capacity provider. When you enable this, Amazon ECS manages and gracefully drains the EC2 container instances that are in the Auto Scaling group capacity provider.
managedInstancesProvider (dict) --
The configuration for the Amazon ECS Managed Instances provider. This includes the infrastructure role, the launch template configuration, and tag propagation settings.
infrastructureRoleArn (string) --
The Amazon Resource Name (ARN) of the infrastructure role that Amazon ECS assumes to manage instances. This role must include permissions for Amazon EC2 instance lifecycle management, networking, and any additional Amazon Web Services services required for your workloads.
For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
instanceLaunchTemplate (dict) --
The launch template that defines how Amazon ECS launches Amazon ECS Managed Instances. This includes the instance profile for your tasks, network and storage configuration, and instance requirements that determine which Amazon EC2 instance types can be used.
For more information, see Store instance launch parameters in Amazon EC2 launch templates in the Amazon EC2 User Guide.
ec2InstanceProfileArn (string) --
The Amazon Resource Name (ARN) of the instance profile that Amazon ECS applies to Amazon ECS Managed Instances. This instance profile must include the necessary permissions for your tasks to access Amazon Web Services services and resources.
For more information, see Amazon ECS instance profile for Managed Instances in the Amazon ECS Developer Guide.
networkConfiguration (dict) --
The network configuration for Amazon ECS Managed Instances. This specifies the subnets and security groups that instances use for network connectivity.
subnets (list) --
The list of subnet IDs where Amazon ECS can launch Amazon ECS Managed Instances. Instances are distributed across the specified subnets for high availability. All subnets must be in the same VPC.
(string) --
securityGroups (list) --
The list of security group IDs to apply to Amazon ECS Managed Instances. These security groups control the network traffic allowed to and from the instances.
(string) --
storageConfiguration (dict) --
The storage configuration for Amazon ECS Managed Instances. This defines the root volume size and type for the instances.
storageSizeGiB (integer) --
The size, in GiB, of the tasks volume.
monitoring (string) --
CloudWatch provides two categories of monitoring: basic monitoring and detailed monitoring. By default, your managed instance is configured for basic monitoring. You can optionally enable detailed monitoring to help you more quickly identify and act on operational issues. You can enable or turn off detailed monitoring at launch or when the managed instance is running or stopped. For more information, see Detailed monitoring for Amazon ECS Managed Instances in the Amazon ECS Developer Guide.
instanceRequirements (dict) --
The instance requirements. You can specify:
The instance types
Instance requirements such as vCPU count, memory, network performance, and accelerator specifications
Amazon ECS automatically selects the instances that match the specified criteria.
vCpuCount (dict) --
The minimum and maximum number of vCPUs for the instance types. Amazon ECS selects instance types that have vCPU counts within this range.
min (integer) --
The minimum number of vCPUs. Instance types with fewer vCPUs than this value are excluded from selection.
max (integer) --
The maximum number of vCPUs. Instance types with more vCPUs than this value are excluded from selection.
memoryMiB (dict) --
The minimum and maximum amount of memory in mebibytes (MiB) for the instance types. Amazon ECS selects instance types that have memory within this range.
min (integer) --
The minimum amount of memory in MiB. Instance types with less memory than this value are excluded from selection.
max (integer) --
The maximum amount of memory in MiB. Instance types with more memory than this value are excluded from selection.
cpuManufacturers (list) --
The CPU manufacturers to include or exclude. You can specify intel, amd, or amazon-web-services to control which CPU types are used for your workloads.
(string) --
memoryGiBPerVCpu (dict) --
The minimum and maximum amount of memory per vCPU in gibibytes (GiB). This helps ensure that instance types have the appropriate memory-to-CPU ratio for your workloads.
min (float) --
The minimum amount of memory per vCPU in GiB. Instance types with a lower memory-to-vCPU ratio are excluded from selection.
max (float) --
The maximum amount of memory per vCPU in GiB. Instance types with a higher memory-to-vCPU ratio are excluded from selection.
excludedInstanceTypes (list) --
The instance types to exclude from selection. Use this to prevent Amazon ECS from selecting specific instance types that may not be suitable for your workloads.
(string) --
instanceGenerations (list) --
The instance generations to include. You can specify current to use the latest generation instances, or previous to include previous generation instances for cost optimization.
(string) --
spotMaxPricePercentageOverLowestPrice (integer) --
The maximum price for Spot instances as a percentage over the lowest priced On-Demand instance. This helps control Spot instance costs while maintaining access to capacity.
onDemandMaxPricePercentageOverLowestPrice (integer) --
The price protection threshold for On-Demand Instances, as a percentage higher than an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon ECS selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold.
bareMetal (string) --
Indicates whether to include bare metal instance types. Set to included to allow bare metal instances, excluded to exclude them, or required to use only bare metal instances.
burstablePerformance (string) --
Indicates whether to include burstable performance instance types (T2, T3, T3a, T4g). Set to included to allow burstable instances, excluded to exclude them, or required to use only burstable instances.
requireHibernateSupport (boolean) --
Indicates whether the instance types must support hibernation. When set to true, only instance types that support hibernation are selected.
networkInterfaceCount (dict) --
The minimum and maximum number of network interfaces for the instance types. This is useful for workloads that require multiple network interfaces.
min (integer) --
The minimum number of network interfaces. Instance types that support fewer network interfaces are excluded from selection.
max (integer) --
The maximum number of network interfaces. Instance types that support more network interfaces are excluded from selection.
localStorage (string) --
Indicates whether to include instance types with local storage. Set to included to allow local storage, excluded to exclude it, or required to use only instances with local storage.
localStorageTypes (list) --
The local storage types to include. You can specify hdd for hard disk drives, ssd for solid state drives, or both.
(string) --
totalLocalStorageGB (dict) --
The minimum and maximum total local storage in gigabytes (GB) for instance types with local storage.
min (float) --
The minimum total local storage in GB. Instance types with less local storage are excluded from selection.
max (float) --
The maximum total local storage in GB. Instance types with more local storage are excluded from selection.
baselineEbsBandwidthMbps (dict) --
The minimum and maximum baseline Amazon EBS bandwidth in megabits per second (Mbps). This is important for workloads with high storage I/O requirements.
min (integer) --
The minimum baseline Amazon EBS bandwidth in Mbps. Instance types with lower Amazon EBS bandwidth are excluded from selection.
max (integer) --
The maximum baseline Amazon EBS bandwidth in Mbps. Instance types with higher Amazon EBS bandwidth are excluded from selection.
acceleratorTypes (list) --
The accelerator types to include. You can specify gpu for graphics processing units, fpga for field programmable gate arrays, or inference for machine learning inference accelerators.
(string) --
acceleratorCount (dict) --
The minimum and maximum number of accelerators for the instance types. This is used when you need instances with specific numbers of GPUs or other accelerators.
min (integer) --
The minimum number of accelerators. Instance types with fewer accelerators are excluded from selection.
max (integer) --
The maximum number of accelerators. Instance types with more accelerators are excluded from selection.
acceleratorManufacturers (list) --
The accelerator manufacturers to include. You can specify nvidia, amd, amazon-web-services, or xilinx depending on your accelerator requirements.
(string) --
acceleratorNames (list) --
The specific accelerator names to include. For example, you can specify a100, v100, k80, or other specific accelerator models.
(string) --
acceleratorTotalMemoryMiB (dict) --
The minimum and maximum total accelerator memory in mebibytes (MiB). This is important for GPU workloads that require specific amounts of video memory.
min (integer) --
The minimum total accelerator memory in MiB. Instance types with less accelerator memory are excluded from selection.
max (integer) --
The maximum total accelerator memory in MiB. Instance types with more accelerator memory are excluded from selection.
networkBandwidthGbps (dict) --
The minimum and maximum network bandwidth in gigabits per second (Gbps). This is crucial for network-intensive workloads that require high throughput.
min (float) --
The minimum network bandwidth in Gbps. Instance types with lower network bandwidth are excluded from selection.
max (float) --
The maximum network bandwidth in Gbps. Instance types with higher network bandwidth are excluded from selection.
allowedInstanceTypes (list) --
The instance types to include in the selection. When specified, Amazon ECS only considers these instance types, subject to the other requirements specified.
(string) --
maxSpotPriceAsPercentageOfOptimalOnDemandPrice (integer) --
The maximum price for Spot instances as a percentage of the optimal On-Demand price. This provides more precise cost control for Spot instance selection.
propagateTags (string) --
Determines whether tags from the capacity provider are automatically applied to Amazon ECS Managed Instances. This helps with cost allocation and resource management by ensuring consistent tagging across your infrastructure.
updateStatus (string) --
The update status of the capacity provider. The following are the possible states that are returned.
DELETE_IN_PROGRESS
The capacity provider is in the process of being deleted.
DELETE_COMPLETE
The capacity provider was successfully deleted and has an INACTIVE status.
DELETE_FAILED
The capacity provider can't be deleted. The update status reason provides further details about why the delete failed.
updateStatusReason (string) --
The update status reason. This provides further details about the update status for the capacity provider.
tags (list) --
The metadata that you apply to the capacity provider to help you categorize and organize it. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
type (string) --
The type of capacity provider. For Amazon ECS Managed Instances, this value is MANAGED_INSTANCES, indicating that Amazon ECS manages the underlying Amazon EC2 instances on your behalf.
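For orientation, the following is a minimal boto3 sketch that creates a Managed Instances capacity provider and reads back a few of the fields described above. It assumes the create_capacity_provider operation accepts the cluster and managedInstancesProvider parameters introduced in this release; the cluster name, ARNs, subnet, and security group IDs are placeholders.
import boto3

ecs = boto3.client("ecs")

# Hypothetical names and ARNs for illustration only.
response = ecs.create_capacity_provider(
    name="demo-managed-instances-cp",
    cluster="demo-cluster",
    managedInstancesProvider={
        "infrastructureRoleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
        "instanceLaunchTemplate": {
            "ec2InstanceProfileArn": "arn:aws:iam::111122223333:instance-profile/ecsInstanceProfile",
            "networkConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
            },
            "storageConfiguration": {"storageSizeGiB": 100},
            "monitoring": "BASIC",
            "instanceRequirements": {
                "vCpuCount": {"min": 2, "max": 8},
                "memoryMiB": {"min": 4096, "max": 32768},
                "cpuManufacturers": ["intel", "amd"],
            },
        },
        "propagateTags": "CAPACITY_PROVIDER",
    },
)

cp = response["capacityProvider"]
print(cp["name"], cp["status"], cp.get("updateStatus"))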
{'service': {'deployments': {'launchType': {'MANAGED_INSTANCES'}}, 'launchType': {'MANAGED_INSTANCES'}, 'taskSets': {'launchType': {'MANAGED_INSTANCES'}}}}
Modifies the parameters of a service.
For services using the rolling update ( ECS) deployment type, you can update the desired count, deployment configuration, network configuration, load balancers, service registries, enable ECS managed tags option, propagate tags option, task placement constraints and strategies, and task definition. When you update any of these parameters, Amazon ECS starts new tasks with the new configuration.
You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when starting or running a task, or when creating or updating a service. You can update your volume configurations and trigger a new deployment. volumeConfigurations is only supported for REPLICA services, not DAEMON services. If you leave volumeConfigurations null, it doesn't trigger a new deployment. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
For services using the blue/green ( CODE_DEPLOY) deployment controller, only the desired count, deployment configuration, health check grace period, task placement constraints and strategies, enable ECS managed tags option, and propagate tags can be updated using this API. If the network configuration, platform version, task definition, or load balancer need to be updated, create a new CodeDeploy deployment. For more information, see CreateDeployment in the CodeDeploy API Reference.
For services using an external deployment controller, you can update only the desired count, task placement constraints and strategies, health check grace period, enable ECS managed tags option, and propagate tags option using this API. If the launch type, load balancer, network configuration, platform version, or task definition need to be updated, create a new task set. For more information, see CreateTaskSet.
You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount parameter.
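As a quick illustration, here is a hedged boto3 sketch that scales an existing service by updating only desiredCount; the cluster and service names are placeholders.
import boto3

ecs = boto3.client("ecs")

# Scale the hypothetical "web" service in "demo-cluster" to 6 tasks.
ecs.update_service(
    cluster="demo-cluster",
    service="web",
    desiredCount=6,
)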
You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when starting or running a task, or when creating or updating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
If you have updated the container image of your application, you can create a new task definition with that image and deploy it to your service. The service scheduler uses the minimum healthy percent and maximum percent parameters (in the service's deployment configuration) to determine the deployment strategy.
You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent and maximumPercent, to determine the deployment strategy.
If minimumHealthyPercent is below 100%, the scheduler can ignore desiredCount temporarily during a deployment. For example, if desiredCount is four tasks, a minimum of 50% allows the scheduler to stop two existing tasks before starting two new tasks. Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer.
The maximumPercent parameter represents an upper limit on the number of running tasks during a deployment. You can use it to define the deployment batch size. For example, if desiredCount is four tasks, a maximum of 200% starts four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available).
When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout. After this, SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent.
When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic.
Determine which of the container instances in your cluster can support your service's task definition. For example, they have the required CPU, memory, ports, and container instance attributes.
By default, the service scheduler attempts to balance tasks across Availability Zones in this manner even though you can choose a different placement strategy.
Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic:
Sort the container instances by the largest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have two, container instances in either zone B or C are considered optimal for termination.
Stop the task on a container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the largest number of running tasks for this service.
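The scale-out selection logic described above can be pictured with a small toy sketch. This is an illustration of the described ordering only, not the actual ECS scheduler, and the candidate data shape is made up.
from collections import Counter

def pick_launch_candidate(candidates):
    """Pick an instance following the scale-out logic described above.

    candidates: list of dicts, for example
        {"instanceId": "i-abc", "zone": "us-east-1a", "runningServiceTasks": 1}
    """
    tasks_per_zone = Counter()
    for c in candidates:
        tasks_per_zone[c["zone"]] += c["runningServiceTasks"]
    # Prefer the zone with the fewest running tasks for this service,
    # then the instance with the fewest running tasks within that zone.
    return min(candidates, key=lambda c: (tasks_per_zone[c["zone"]], c["runningServiceTasks"]))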
See also: AWS API Documentation
Request Syntax
client.update_service( cluster='string', service='string', desiredCount=123, taskDefinition='string', capacityProviderStrategy=[ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], deploymentConfiguration={ 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ], 'hookDetails': {...}|[...]|123|123.4|'string'|True|None }, ] }, availabilityZoneRebalancing='ENABLED'|'DISABLED', networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, placementConstraints=[ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], placementStrategy=[ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], platformVersion='string', forceNewDeployment=True|False, healthCheckGracePeriodSeconds=123, deploymentController={ 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, enableExecuteCommand=True|False, enableECSManagedTags=True|False, loadBalancers=[ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], propagateTags='TASK_DEFINITION'|'SERVICE'|'NONE', serviceRegistries=[ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], serviceConnectConfiguration={ 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, volumeConfigurations=[ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], vpcLatticeConfigurations=[ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] )
string
The short name or full Amazon Resource Name (ARN) of the cluster that your service runs on. If you do not specify a cluster, the default cluster is assumed.
You can't change the cluster name.
string
[REQUIRED]
The name of the service to update.
integer
The number of instantiations of the task to place and keep running in your service.
This parameter doesn't trigger a new service deployment.
string
The family and revision ( family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used. If you modify the task definition with UpdateService, Amazon ECS spawns a task with the new version of the task definition and then stops an old task after the new version is running.
This parameter triggers a new service deployment.
list
The details of a capacity provider strategy. You can set a capacity provider when you create a cluster, run a task, or update a service.
When you use Fargate, the capacity providers are FARGATE or FARGATE_SPOT.
When you use Amazon EC2, the capacity providers are Auto Scaling groups.
You can change capacity providers for rolling deployments and blue/green deployments.
The following list provides the valid transitions:
Update the Fargate launch type to an Auto Scaling group capacity provider.
Update the Amazon EC2 launch type to a Fargate capacity provider.
Update the Fargate capacity provider to an Auto Scaling group capacity provider.
Update the Amazon EC2 capacity provider to a Fargate capacity provider.
Update the Auto Scaling group or Fargate capacity provider back to the launch type. Pass an empty list in the capacityProviderStrategy parameter.
For information about Amazon Web Services CDK considerations, see Amazon Web Services CDK considerations.
This parameter doesn't trigger a new service deployment.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) -- [REQUIRED]
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
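The base and weight behavior described above might look like the following hedged sketch; the capacity provider, cluster, and service names are placeholders.
import boto3

ecs = boto3.client("ecs")

# Run at least 2 tasks on "base-cp", then distribute additional tasks 1:4
# between "base-cp" and "burst-cp" (hypothetical provider names).
ecs.update_service(
    cluster="demo-cluster",
    service="web",
    capacityProviderStrategy=[
        {"capacityProvider": "base-cp", "base": 2, "weight": 1},
        {"capacityProvider": "burst-cp", "weight": 4},
    ],
)

# To switch the service back to its launch type, pass an empty strategy:
# ecs.update_service(cluster="demo-cluster", service="web", capacityProviderStrategy=[])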
dict
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
This parameter doesn't trigger a new service deployment.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide
enable (boolean) -- [REQUIRED]
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) -- [REQUIRED]
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the service uses either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
If any tasks are unhealthy and if maximumPercent doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one by one, using the minimumHealthyPercent as a constraint, to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services.
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service scheduler is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) -- [REQUIRED]
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) -- [REQUIRED]
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) -- [REQUIRED]
Determines whether to use the CloudWatch alarm option in the service deployment process.
strategy (string) --
The deployment strategy for the service. Choose from these valid values:
ROLLING - When you create a service which uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration.
BLUE_GREEN - A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed.
bakeTimeInMinutes (integer) --
The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted.
You must provide this parameter when you use the BLUE_GREEN deployment strategy.
lifecycleHooks (list) --
An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle.
(dict) --
A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets.
For more information, see Lifecycle hooks for Amazon ECS service deployments in the Amazon Elastic Container Service Developer Guide.
hookTargetArn (string) --
The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported.
You must provide this parameter when configuring a deployment lifecycle hook.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf.
For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the Amazon Elastic Container Service Developer Guide.
lifecycleStages (list) --
The lifecycle stages at which to run the hook. Choose from these valid values:
RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage.
PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage.
POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage.
PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. You can use a lifecycle hook for this stage.
POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. You can use a lifecycle hook for this stage.
You must provide this parameter when configuring a deployment lifecycle hook.
(string) --
hookDetails (document) --
The details of the deployment lifecycle hook. This provides additional configuration for how the hook should be executed during deployment operations on Amazon ECS Managed Instances.
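Putting the deployment configuration fields above together, here is a minimal sketch of a blue/green update with a circuit breaker, CloudWatch alarm rollback, and one lifecycle hook. Every name and ARN is a placeholder.
import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="demo-cluster",
    service="web",
    deploymentConfiguration={
        "deploymentCircuitBreaker": {"enable": True, "rollback": True},
        "maximumPercent": 200,
        "minimumHealthyPercent": 100,
        "alarms": {
            "alarmNames": ["web-5xx-alarm"],
            "enable": True,
            "rollback": True,
        },
        "strategy": "BLUE_GREEN",
        "bakeTimeInMinutes": 15,
        "lifecycleHooks": [
            {
                "hookTargetArn": "arn:aws:lambda:us-east-1:111122223333:function:validate-green",
                "roleArn": "arn:aws:iam::111122223333:role/ecsHookRole",
                "lifecycleStages": ["POST_TEST_TRAFFIC_SHIFT"],
            }
        ],
    },
)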
string
Indicates whether to use Availability Zone rebalancing for the service.
For more information, see Balancing an Amazon ECS service across Availability Zones in the Amazon Elastic Container Service Developer Guide.
This parameter doesn't trigger a new service deployment.
dict
An object representing the network configuration for the service.
This parameter triggers a new service deployment.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) -- [REQUIRED]
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
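A hedged sketch of updating the awsvpc network configuration described above; the subnet and security group IDs are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="demo-cluster",
    service="web",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)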
list
An array of task placement constraint objects to update the service to use. If no value is specified, the existing placement constraints for the service will remain unchanged. If this value is specified, it will override any existing placement constraints defined for the service. To remove all existing placement constraints, specify an empty array.
You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
This parameter doesn't trigger a new service deployment.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
list
The task placement strategy objects to update the service to use. If no value is specified, the existing placement strategy for the service will remain unchanged. If this value is specified, it will override the existing placement strategy defined for the service. To remove an existing placement strategy, specify an empty object.
You can specify a maximum of five strategy rules for each service.
This parameter doesn't trigger a new service deployment.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
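For example, the constraint and strategy shapes above could be combined as in this sketch; the attribute names and values are illustrative only.
import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="demo-cluster",
    service="web",
    placementConstraints=[
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ m5.*"},
    ],
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
)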
string
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If a platform version is not specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
This parameter triggers a new service deployment.
boolean
Determines whether to force a new deployment of the service. By default, deployments aren't forced. You can use this option to start a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination ( my_image:latest) or to roll Fargate tasks onto a newer platform version.
integer
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started. If you don't specify a health check grace period value, the default value of 0 is used. If you don't use any of the health checks, then healthCheckGracePeriodSeconds is unused.
If your service's tasks take a while to start and respond to health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
This parameter doesn't trigger a new service deployment.
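For instance, a forced redeployment with a longer health check grace period might look like this hedged sketch; the cluster and service names are placeholders.
import boto3

ecs = boto3.client("ecs")

# Start a new deployment without changing the service definition, and give
# tasks 2 minutes before health check results count against them.
ecs.update_service(
    cluster="demo-cluster",
    service="web",
    forceNewDeployment=True,
    healthCheckGracePeriodSeconds=120,
)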
dict
The deployment controller to use for the service.
type (string) -- [REQUIRED]
The deployment controller type to use.
The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are:
ECS When you create a service which uses the ECS deployment controller, you can choose between the following deployment strategies:
ROLLING: When you create a service which uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios:
Gradual service updates: You need to update your service incrementally without taking the entire service offline at once.
Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments).
Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one.
No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds.
Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners.
No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments).
Stateful applications: Your application maintains state that makes it difficult to run two parallel environments.
Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment.
Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios.
BLUE_GREEN: A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios:
Service validation: When you need to validate new service revisions before directing production traffic to them
Zero downtime: When your service requires zero-downtime deployments
Instant roll back: When you need the ability to quickly roll back if issues are detected
Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect
EXTERNAL Use a third-party deployment controller.
Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it.
boolean
If true, this enables execute command functionality on all task containers.
If you do not want to override the value that was set when the service was created, you can set this to null when performing this action.
This parameter doesn't trigger a new service deployment.
boolean
Determines whether to turn on Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
Only tasks launched after the update will reflect the update. To update the tags on all tasks, set forceNewDeployment to true, so that Amazon ECS starts new tasks with the updated tags.
This parameter doesn't trigger a new service deployment.
list
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
When you add, update, or remove a load balancer configuration, Amazon ECS starts new tasks with the updated Elastic Load Balancing configuration, and then stops the old tasks when the new tasks are running.
For services that use rolling updates, you can add, update, or remove Elastic Load Balancing target groups. You can update from a single target group to multiple target groups and from multiple target groups to a single target group.
For services that use blue/green deployments, you can update Elastic Load Balancing target groups by using CreateDeployment through CodeDeploy. Note that multiple target groups are not supported for blue/green deployments. For more information see Register multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services that use the external deployment controller, you can add, update, or remove load balancers by using CreateTaskSet. Note that multiple target groups are not supported for external deployments. For more information see Register multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
You can remove existing loadBalancers by passing an empty list.
This parameter triggers a new service deployment.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
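A hedged sketch of the load balancer shape above, including the advanced blue/green settings; all ARNs, names, and ports are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="demo-cluster",
    service="web",
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/blue/abc123",
            "containerName": "web",
            "containerPort": 8080,
            "advancedConfiguration": {
                "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green/def456",
                "productionListenerRule": "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/web/aaa/bbb/ccc",
                "testListenerRule": "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/web/aaa/bbb/ddd",
                "roleArn": "arn:aws:iam::111122223333:role/ecsElbRole",
            },
        }
    ],
)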
string
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
Only tasks launched after the update will reflect the update. To update the tags on all tasks, set forceNewDeployment to true, so that Amazon ECS starts new tasks with the updated tags.
This parameter doesn't trigger a new service deployment.
list
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
When you add, update, or remove the service registries configuration, Amazon ECS starts new tasks with the updated service registries configuration, and then stops the old tasks when the new tasks are running.
You can remove existing serviceRegistries by passing an empty list.
This parameter triggers a new service deployment.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
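A sketch of a serviceRegistries entry for a task that uses the awsvpc network mode with an SRV record; the Cloud Map service ARN and port are placeholders.

serviceRegistries = [
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef',  # placeholder
        # With awsvpc and an SRV record, specify either a port or a
        # containerName/containerPort pair from the task definition, not both.
        'port': 8080,
    }
]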
dict
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
This parameter triggers a new service deployment.
enabled (boolean) -- [REQUIRED]
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) -- [REQUIRED]
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) -- [REQUIRED]
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
testTrafficRules (dict) --
The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic.
header (dict) -- [REQUIRED]
The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers.
name (string) -- [REQUIRED]
The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like X-Test-Version or X-Canary-Request that can be used to identify test traffic.
value (dict) --
The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions.
exact (string) -- [REQUIRED]
The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments.
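A sketch of a clientAliases entry that uses testTrafficRules to route header-marked requests to the new service revision during a blue/green deployment; the header name and dnsName are illustrative placeholders.

clientAliases = [
    {
        'port': 80,            # port that other tasks in the namespace use to reach this service
        'dnsName': 'api',      # short name that client applications use (placeholder)
        'testTrafficRules': {
            'header': {
                'name': 'X-Test-Version',          # example header used to mark test traffic
                'value': {'exact': 'green'}        # only requests with this exact value go to the new revision
            }
        }
    }
]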
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP, HTTP2, and GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) -- [REQUIRED]
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) -- [REQUIRED]
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
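A sketch of an options map for the awslogs log driver that combines the settings described above; the log group, Region, and prefix are placeholders, and all values are strings.

options = {
    'awslogs-group': '/ecs/my-service',        # placeholder; must exist unless awslogs-create-group is 'true'
    'awslogs-create-group': 'true',
    'awslogs-region': 'us-east-1',             # Region the driver sends logs to
    'awslogs-stream-prefix': 'my-service',     # stream name becomes prefix-name/container-name/ecs-task-id
    'mode': 'non-blocking',                    # buffer logs instead of blocking stdout/stderr writes
    'max-buffer-size': '25m',                  # in-memory buffer used with non-blocking mode
}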
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) -- [REQUIRED]
The name of the secret.
valueFrom (string) -- [REQUIRED]
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the Amazon Elastic Container Service Developer Guide.
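Pulling the Service Connect fields above together, a hedged sketch of a serviceConnectConfiguration; the namespace, port name, log group, and Region are placeholders.

serviceConnectConfiguration = {
    'enabled': True,
    'namespace': 'internal',                      # Cloud Map namespace name or ARN (placeholder)
    'services': [
        {
            'portName': 'api',                    # must match a portMappings name in the task definition
            'clientAliases': [{'port': 8080, 'dnsName': 'api'}],
        }
    ],
    'logConfiguration': {
        'logDriver': 'awslogs',
        'options': {
            'awslogs-group': '/ecs/service-connect-proxy',
            'awslogs-region': 'us-east-1',
            'awslogs-stream-prefix': 'sc',
        },
        # secretOptions could reference a Secrets Manager or SSM Parameter Store ARN, for example:
        # 'secretOptions': [{'name': 'splunk-token', 'valueFrom': 'arn:aws:secretsmanager:...'}],
    },
}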
list
The details of the volume that was configuredAtLaunch. You can configure the size, volumeType, IOPS, throughput, snapshot and encryption in ServiceManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition. If set to null, no new deployment is triggered. Otherwise, if this configuration differs from the existing one, it triggers a new deployment.
This parameter triggers a new service deployment.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) -- [REQUIRED]
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as false, the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either snapshotId or sizeInGiB in your volume configuration. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeInitializationRate (integer) --
The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a snapshotId. For more information, see Initialize Amazon EBS volumes in the Amazon EBS User Guide.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) -- [REQUIRED]
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as this prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) -- [REQUIRED]
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
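A sketch of a volumeConfigurations entry for a configuredAtLaunch volume, staying within the gp3 ranges listed above; the volume name must match the task definition and the role ARN is a placeholder.

volumeConfigurations = [
    {
        'name': 'data',                        # must match the volume name in the task definition
        'managedEBSVolume': {
            'volumeType': 'gp3',
            'sizeInGiB': 100,                  # gp3 supports 1-16,384 GiB
            'iops': 3000,                      # gp3 default
            'throughput': 125,                 # MiB/s
            'encrypted': True,
            'filesystemType': 'xfs',
            'tagSpecifications': [
                {
                    'resourceType': 'volume',
                    'tags': [{'key': 'team', 'value': 'platform'}],   # illustrative tag
                }
            ],
            # Amazon ECS infrastructure role that manages the volumes (placeholder ARN)
            'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole',
        },
    }
]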
list
An object representing the VPC Lattice configuration for the service being updated.
This parameter triggers a new service deployment.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to.
roleArn (string) -- [REQUIRED]
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) -- [REQUIRED]
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) -- [REQUIRED]
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
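Closing out the request parameters above, a minimal update_service sketch that attaches a VPC Lattice target group; the cluster, service, ARNs, and port name are placeholders.

import boto3

ecs = boto3.client('ecs')
response = ecs.update_service(
    cluster='my-cluster',
    service='my-service',
    vpcLatticeConfigurations=[
        {
            'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole',   # infrastructure role (placeholder)
            'targetGroupArn': 'arn:aws:vpc-lattice:us-east-1:111122223333:targetgroup/tg-0123456789abcdef0',
            'portName': 'api',   # portMapping name from the task definition
        }
    ],
)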
dict
Response Syntax
{ 'service': { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False }, 'strategy': 'ROLLING'|'BLUE_GREEN', 'bakeTimeInMinutes': 123, 'lifecycleHooks': [ { 'hookTargetArn': 'string', 'roleArn': 'string', 'lifecycleStages': [ 'RECONCILE_SERVICE'|'PRE_SCALE_UP'|'POST_SCALE_UP'|'TEST_TRAFFIC_SHIFT'|'POST_TEST_TRAFFIC_SHIFT'|'PRODUCTION_TRAFFIC_SHIFT'|'POST_PRODUCTION_TRAFFIC_SHIFT', ], 'hookDetails': {...}|[...]|123|123.4|'string'|True|None }, ] }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 
'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string', 'testTrafficRules': { 'header': { 'name': 'string', 'value': { 'exact': 'string' } } } }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'volumeInitializationRate': 123, 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False, 'availabilityZoneRebalancing': 'ENABLED'|'DISABLED' } }Response Structure
(dict) --
service (dict) --
The full description of your service following the update call.
serviceArn (string) --
The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide.
serviceName (string) --
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers (list) --
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, omit the load balancer name parameter.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
serviceRegistries (list) --
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
status (string) --
The status of the service. The valid values are ACTIVE, DRAINING, or INACTIVE.
desiredCount (integer) --
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
runningCount (integer) --
The number of tasks in the cluster that are in the RUNNING state.
pendingCount (integer) --
The number of tasks in the cluster that are in the PENDING state.
launchType (string) --
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy (list) --
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B. A strategy sketch follows the base value notes below.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
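A sketch of a capacity provider strategy matching the Weighted Distribution example above; the provider names are placeholders.

capacityProviderStrategy = [
    {'capacityProvider': 'capacityProviderA', 'base': 2, 'weight': 1},   # the first 2 tasks always land on A
    {'capacityProvider': 'capacityProviderB', 'base': 0, 'weight': 4},   # then roughly 4 tasks run on B for every 1 on A
]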
platformVersion (string) --
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
taskDefinition (string) --
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
deploymentConfiguration (dict) --
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide
enable (boolean) --
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
The Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the service uses either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
If any tasks are unhealthy and if maximumPercent doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one by one, using the minimumHealthyPercent as a constraint, to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see Amazon ECS services.
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
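A small worked sketch of the bounds described above, assuming a REPLICA service with a desiredCount of four and the default percentages.

import math

desired_count = 4
maximum_percent = 200          # default for the REPLICA scheduler
minimum_healthy_percent = 100  # default for a replica service

# Upper limit on RUNNING or PENDING tasks during the deployment (rounded down).
max_tasks = math.floor(desired_count * maximum_percent / 100)                 # 8
# Lower limit on tasks that must stay RUNNING (rounded up).
min_healthy_tasks = math.ceil(desired_count * minimum_healthy_percent / 100)  # 4
print(max_tasks, min_healthy_tasks)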
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) --
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) --
Determines whether to use the CloudWatch alarm option in the service deployment process.
strategy (string) --
The deployment strategy for the service. Choose from these valid values:
ROLLING - When you create a service which uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration.
BLUE_GREEN - A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed.
bakeTimeInMinutes (integer) --
The time period when both blue and green service revisions are running simultaneously after the production traffic has shifted.
You must provide this parameter when you use the BLUE_GREEN deployment strategy.
lifecycleHooks (list) --
An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle.
(dict) --
A deployment lifecycle hook runs custom logic at specific stages of the deployment process. Currently, you can use Lambda functions as hook targets.
For more information, see Lifecycle hooks for Amazon ECS service deployments in the Amazon Elastic Container Service Developer Guide.
hookTargetArn (string) --
The Amazon Resource Name (ARN) of the hook target. Currently, only Lambda function ARNs are supported.
You must provide this parameter when configuring a deployment lifecycle hook.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call Lambda functions on your behalf.
For more information, see Permissions required for Lambda functions in Amazon ECS blue/green deployments in the Amazon Elastic Container Service Developer Guide.
lifecycleStages (list) --
The lifecycle stages at which to run the hook. Choose from these valid values:
RECONCILE_SERVICE The reconciliation stage that only happens when you start a new service deployment with more than 1 service revision in an ACTIVE state. You can use a lifecycle hook for this stage.
PRE_SCALE_UP The green service revision has not started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
POST_SCALE_UP The green service revision has started. The blue service revision is handling 100% of the production traffic. There is no test traffic. You can use a lifecycle hook for this stage.
TEST_TRAFFIC_SHIFT The blue and green service revisions are running. The blue service revision handles 100% of the production traffic. The green service revision is migrating from 0% to 100% of test traffic. You can use a lifecycle hook for this stage.
POST_TEST_TRAFFIC_SHIFT The test traffic shift is complete. The green service revision handles 100% of the test traffic. You can use a lifecycle hook for this stage.
PRODUCTION_TRAFFIC_SHIFT Production traffic is shifting to the green service revision. The green service revision is migrating from 0% to 100% of production traffic. You can use a lifecycle hook for this stage.
POST_PRODUCTION_TRAFFIC_SHIFT The production traffic shift is complete. You can use a lifecycle hook for this stage.
You must provide this parameter when configuring a deployment lifecycle hook.
(string) --
hookDetails (document) --
The details of the deployment lifecycle hook. This provides additional configuration for how the hook should be executed during deployment operations on Amazon ECS Managed Instances.
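Tying the deployment configuration fields above together, a hedged sketch of a blue/green deploymentConfiguration with one lifecycle hook; the Lambda function and role ARNs are placeholders.

deploymentConfiguration = {
    'strategy': 'BLUE_GREEN',
    'bakeTimeInMinutes': 15,                    # keep blue and green running after production traffic shifts
    'deploymentCircuitBreaker': {'enable': True, 'rollback': True},
    'maximumPercent': 200,
    'minimumHealthyPercent': 100,
    'lifecycleHooks': [
        {
            'hookTargetArn': 'arn:aws:lambda:us-east-1:111122223333:function:validate-green',  # placeholder Lambda ARN
            'roleArn': 'arn:aws:iam::111122223333:role/ecsLifecycleHookRole',                  # placeholder role ARN
            'lifecycleStages': ['POST_TEST_TRAFFIC_SHIFT'],   # run after test traffic reaches the green revision
        }
    ],
}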
taskSets (list) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that's associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
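To illustrate the base and weight behavior described above, here is a hedged boto3 (Python) sketch that keeps a minimum of two tasks on FARGATE and splits additional tasks 1:4 in favor of FARGATE_SPOT; the cluster and task definition names are placeholders.

import boto3

ecs = boto3.client("ecs")

# FARGATE receives the first 2 tasks (base); remaining tasks are distributed
# 1:4 between FARGATE and FARGATE_SPOT according to their weights.
strategy = [
    {"capacityProvider": "FARGATE", "base": 2, "weight": 1},
    {"capacityProvider": "FARGATE_SPOT", "base": 0, "weight": 4},
]

ecs.run_task(
    cluster="my-cluster",          # placeholder
    taskDefinition="my-task:1",    # placeholder
    count=5,
    capacityProviderStrategy=strategy,
)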
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
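The following boto3 (Python) sketch shows the awsvpcConfiguration shape described above being passed on a service update; the subnet IDs, security group ID, cluster, and service names are placeholders.

import boto3

ecs = boto3.client("ecs")

# Placeholder IDs; up to 16 subnets and 5 security groups can be specified.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0abc1234", "subnet-0def5678"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "DISABLED",
    }
}

ecs.update_service(
    cluster="my-cluster",     # placeholder
    service="my-service",     # placeholder
    networkConfiguration=network_configuration,
)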
loadBalancers (list) --
Details on the load balancers that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
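A hedged Python sketch of a load balancer entry that carries the advancedConfiguration block described above for blue/green traffic shifting; all ARNs and the container name are placeholders, and whether the same shape is accepted on your create or update call depends on that operation's request syntax.

# Placeholder ARNs; advancedConfiguration names the alternate (green) target group,
# the production and test listener rules, and the IAM role used for traffic shifting.
load_balancers = [
    {
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/blue/abc123",
        "containerName": "web",
        "containerPort": 80,
        "advancedConfiguration": {
            "alternateTargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green/def456",
            "productionListenerRule": "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/my-alb/aaa/bbb/ccc",
            "testListenerRule": "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/my-alb/aaa/bbb/ddd",
            "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
        },
    }
]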
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
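For example, a hedged Python sketch of a service registry entry for a Cloud Map service that uses an SRV record with a bridge or host network mode task; the registry ARN and the container name and port are placeholders.

# Placeholder values; with bridge/host network mode and an SRV record, the
# containerName and containerPort pair must come from the task definition.
service_registries = [
    {
        "registryArn": "arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef",
        "containerName": "web",
        "containerPort": 8080,
    }
]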
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
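The stability status can be polled with describe_task_sets; a minimal boto3 (Python) sketch, with placeholder cluster, service, and task set identifiers, is shown below.

import time

import boto3

ecs = boto3.client("ecs")

def wait_for_steady_state(cluster, service, task_set_id, delay=15):
    # Poll until the task set reports STEADY_STATE (see the conditions above).
    while True:
        resp = ecs.describe_task_sets(cluster=cluster, service=service, taskSets=[task_set_id])
        task_set = resp["taskSets"][0]
        if task_set["stabilityStatus"] == "STEADY_STATE":
            return task_set
        time.sleep(delay)

# Example call with placeholder identifiers:
# wait_for_steady_state("my-cluster", "my-service", "ecs-svc/1234567890123456789")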
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment.
deployments (list) --
The current state of deployments for the service.
(dict) --
The details of an Amazon ECS service deployment. This is used only when a service uses the ECS deployment controller type.
id (string) --
The ID of the deployment.
status (string) --
The status of the deployment. The following describes each state.
PRIMARY
The most recent deployment of a service.
ACTIVE
A service deployment that still has running tasks, but is in the process of being replaced with a new PRIMARY deployment.
INACTIVE
A deployment that has been completely replaced.
taskDefinition (string) --
The most recent task definition that was specified for the tasks in the service to use.
desiredCount (integer) --
The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount (integer) --
The number of tasks in the deployment that are in the PENDING status.
runningCount (integer) --
The number of tasks in the deployment that are in the RUNNING status.
failedTasks (integer) --
The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a RUNNING state, or if it fails any of its defined health checks and is stopped.
createdAt (datetime) --
The Unix timestamp for the time when the service deployment was created.
updatedAt (datetime) --
The Unix timestamp for the time when the service deployment was last updated.
capacityProviderStrategy (list) --
The capacity provider strategy that the deployment is using.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
launchType (string) --
The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
platformVersion (string) --
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service are running on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
rolloutState (string) --
The rollout state of the deployment. When a service deployment is started, it begins in an IN_PROGRESS state. When the service reaches a steady state, the deployment transitions to a COMPLETED state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a FAILED state. A deployment in FAILED state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker.
rolloutStateReason (string) --
A description of the rollout state of a deployment.
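The rollout state can be read back with describe_services; the following boto3 (Python) sketch inspects the PRIMARY deployment, using placeholder cluster and service names.

import boto3

ecs = boto3.client("ecs")

# Placeholder names; prints IN_PROGRESS, COMPLETED, or FAILED plus the reason.
resp = ecs.describe_services(cluster="my-cluster", services=["my-service"])
primary = next(d for d in resp["services"][0]["deployments"] if d["status"] == "PRIMARY")
print(primary["rolloutState"], primary.get("rolloutStateReason", ""))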
serviceConnectConfiguration (dict) --
The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments.
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) --
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
testTrafficRules (dict) --
The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic.
header (dict) --
The HTTP header-based routing rules that determine which requests should be routed to the new service version during blue/green deployment testing. These rules provide fine-grained control over test traffic routing based on request headers.
name (string) --
The name of the HTTP header to examine for test traffic routing. Common examples include custom headers like X-Test-Version or X-Canary-Request that can be used to identify test traffic.
value (dict) --
The header value matching configuration that determines how the HTTP header value is evaluated for test traffic routing decisions.
exact (string) --
The exact value that the HTTP header must match for the test traffic routing rule to apply. This provides precise control over which requests are routed to the new service revision during blue/green deployments.
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP/HTTP2/GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) --
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
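Putting the preceding Service Connect fields together, here is a hedged Python sketch of a configuration that exposes one port mapping under a client alias and adds a header-based test traffic rule; the namespace, names, ports, and header value are placeholders.

# Placeholder values; "api" must match a portMappings name in the task definition.
service_connect_configuration = {
    "enabled": True,
    "namespace": "internal",           # placeholder Cloud Map namespace
    "services": [
        {
            "portName": "api",
            "discoveryName": "api",
            "clientAliases": [
                {
                    "port": 8080,
                    "dnsName": "api.internal",
                    "testTrafficRules": {
                        "header": {
                            "name": "X-Test-Version",        # placeholder header name
                            "value": {"exact": "green"},
                        }
                    },
                }
            ],
        }
    ],
}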
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
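As an illustration of the options above, a hedged Python sketch of an awslogs log configuration that uses non-blocking delivery with an explicit buffer size; the log group, Region, and prefix are placeholders. (secretOptions would typically accompany drivers such as splunk or awsfirelens rather than awslogs, so they are omitted here.)

# Placeholder values; the log group must already exist unless awslogs-create-group is "true".
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-service",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "my-service",
        "mode": "non-blocking",
        "max-buffer-size": "25m",
    },
}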
serviceConnectResources (list) --
The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name.
(dict) --
The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service.
A task can resolve the dnsName for each of the clientAliases of a service. However, a task can't resolve the discovery names. If you want to connect to a service, refer to the ServiceConnectConfiguration of that service for the list of clientAliases that you can use.
discoveryName (string) --
The discovery name of this Service Connect resource.
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
discoveryArn (string) --
The Amazon Resource Name (ARN) for the service in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS.
volumeConfigurations (list) --
The details of the volume that was configuredAtLaunch. You can configure different settings like the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) --
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If you turn on Region-level Amazon EBS encryption by default but set this value as false, the setting is overridden and the volume is encrypted with the KMS key specified for Amazon EBS encryption by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When a key is specified using this parameter, it overrides Amazon EBS default encryption or any KMS key that you specified for cluster-level managed storage encryption. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information about encrypting Amazon EBS volumes attached to tasks, see Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create volumes for attachment to tasks maintained by the service. You must specify either snapshotId or sizeInGiB in your volume configuration. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeInitializationRate (integer) --
The rate, in MiB/s, at which data is fetched from a snapshot of an existing EBS volume to create new volumes for attachment to the tasks maintained by the service. This property can be specified only if you specify a snapshotId. For more information, see Initialize Amazon EBS volumes in the Amazon EBS User Guide.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) --
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) --
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the tasks will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
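For example, a hedged Python sketch of a volume configuration that creates an encrypted 100 GiB gp3 volume per task for a task definition volume named "data"; the volume name and infrastructure role ARN are placeholders.

# Placeholder values; "name" must match the volume name in the task definition.
volume_configurations = [
    {
        "name": "data",
        "managedEBSVolume": {
            "encrypted": True,
            "volumeType": "gp3",
            "sizeInGiB": 100,
            "iops": 3000,
            "throughput": 125,
            "filesystemType": "xfs",
            "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRoleForVolumes",
        },
    }
]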
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the deployment.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment.
vpcLatticeConfigurations (list) --
The VPC Lattice configuration for the service deployment.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to.
roleArn (string) --
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) --
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
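A hedged Python sketch of a single VPC Lattice configuration entry with placeholder ARNs; "web" is assumed to be a port mapping name from the task definition.

# Placeholder ARNs; registers tasks with a VPC Lattice target group via the "web" port mapping.
vpc_lattice_configurations = [
    {
        "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRoleForVpcLattice",
        "targetGroupArn": "arn:aws:vpc-lattice:us-east-1:111122223333:targetgroup/tg-0123456789abcdef0",
        "portName": "web",
    }
]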
roleArn (string) --
The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events (list) --
The event stream for your service. A maximum of 100 of the latest events are displayed.
(dict) --
The details for an event that's associated with a service.
id (string) --
The ID string for the event.
createdAt (datetime) --
The Unix timestamp for the time when the event was triggered.
message (string) --
The event message.
createdAt (datetime) --
The Unix timestamp for the time when the service was created.
placementConstraints (list) --
The placement constraints for the tasks in the service.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
placementStrategy (list) --
The placement strategy that determines how tasks for the service are placed.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
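A hedged boto3 (Python) sketch that combines a memberOf constraint with spread and binpack strategies on service creation; the cluster, service, and task definition names, and the instance type pattern, are placeholders.

import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",          # placeholder
    serviceName="my-service",      # placeholder
    taskDefinition="my-task:1",    # placeholder
    desiredCount=4,
    placementConstraints=[
        # Restrict placement to container instances whose type matches the pattern.
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ m5.*"}
    ],
    placementStrategy=[
        # Spread across Availability Zones first, then binpack on memory.
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
)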
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
healthCheckGracePeriodSeconds (integer) --
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started.
schedulingStrategy (string) --
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available.
REPLICA-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON-The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints.
deploymentController (dict) --
The deployment controller type the service is using.
type (string) --
The deployment controller type to use.
The deployment controller is the mechanism that determines how tasks are deployed for your service. The valid options are:
ECS When you create a service which uses the ECS deployment controller, you can choose between the following deployment strategies:
ROLLING: When you create a service which uses the rolling update ( ROLLING) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration. Rolling update deployments are best suited for the following scenarios:
Gradual service updates: You need to update your service incrementally without taking the entire service offline at once.
Limited resource requirements: You want to avoid the additional resource costs of running two complete environments simultaneously (as required by blue/green deployments).
Acceptable deployment time: Your application can tolerate a longer deployment process, as rolling updates replace tasks one by one.
No need for instant roll back: Your service can tolerate a rollback process that takes minutes rather than seconds.
Simple deployment process: You prefer a straightforward deployment approach without the complexity of managing multiple environments, target groups, and listeners.
No load balancer requirement: Your service doesn't use or require a load balancer, Application Load Balancer, Network Load Balancer, or Service Connect (which are required for blue/green deployments).
Stateful applications: Your application maintains state that makes it difficult to run two parallel environments.
Cost sensitivity: You want to minimize deployment costs by not running duplicate environments during deployment.
Rolling updates are the default deployment strategy for services and provide a balance between deployment safety and resource efficiency for many common application scenarios.
BLUE_GREEN: A blue/green deployment strategy ( BLUE_GREEN) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed. Amazon ECS blue/green deployments are best suited for the following scenarios:
Service validation: When you need to validate new service revisions before directing production traffic to them
Zero downtime: When your service requires zero-downtime deployments
Instant roll back: When you need the ability to quickly roll back if issues are detected
Load balancer requirement: When your service uses Application Load Balancer, Network Load Balancer, or Service Connect
External Use a third-party deployment controller.
Blue/green deployment (powered by CodeDeploy) CodeDeploy installs an updated version of the application as a new replacement task set and reroutes production traffic from the original application task set to the replacement task set. The original task set is terminated after a successful deployment. Use this deployment controller to verify a new deployment of a service before sending production traffic to it.
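To make the controller choice concrete, the sketch below creates a service with the ECS deployment controller, which covers both the rolling update and blue/green strategies described above. The names are placeholders and the rest of the service configuration is omitted for brevity:
import boto3

ecs = boto3.client('ecs')

# Sketch: select the ECS deployment controller; CODE_DEPLOY or EXTERNAL could be
# supplied instead for the other controller types described above.
ecs.create_service(
    cluster='my-cluster',
    serviceName='web',
    taskDefinition='web:12',
    desiredCount=2,
    deploymentController={'type': 'ECS'}
)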
tags (list) --
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
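As a hedged example of tags that satisfy the restrictions above, the following sketch tags a service; the resource ARN and the tag keys and values are placeholders:
import boto3

ecs = boto3.client('ecs')

# Sketch: two well-formed tags; neither uses the reserved aws: prefix, and the
# keys and values stay well under the 128/256 character limits.
ecs.tag_resource(
    resourceArn='arn:aws:ecs:us-east-1:111122223333:service/my-cluster/my-service',
    tags=[
        {'key': 'team', 'value': 'platform'},
        {'key': 'environment', 'value': 'staging'}
    ]
)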
createdBy (string) --
The principal that created the service.
enableECSManagedTags (boolean) --
Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
propagateTags (string) --
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
enableExecuteCommand (boolean) --
Determines whether the execute command functionality is turned on for the service. If true, the execute command functionality is turned on for all containers in tasks as part of the service.
availabilityZoneRebalancing (string) --
Indicates whether to use Availability Zone rebalancing for the service.
For more information, see Balancing an Amazon ECS service across Availability Zones in the Amazon Elastic Container Service Developer Guide.
{'taskSet': {'launchType': {'MANAGED_INSTANCES'}}}
Modifies which task set in a service is the primary task set. Any parameters that are updated on the primary task set in a service will transition to the service. This is used when a service uses the EXTERNAL deployment controller type. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
See also: AWS API Documentation
Request Syntax
client.update_service_primary_task_set( cluster='string', service='string', primaryTaskSet='string' )
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that the task set exists in.
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the service that the task set exists in.
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the task set to set as the primary task set in the deployment.
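A minimal usage sketch for this operation follows; the cluster, service, and task set identifiers are placeholders:
import boto3

ecs = boto3.client('ecs')

# Sketch: promote a task set to primary for a service that uses the EXTERNAL
# deployment controller.
response = ecs.update_service_primary_task_set(
    cluster='my-cluster',
    service='my-service',
    primaryTaskSet='arn:aws:ecs:us-east-1:111122223333:task-set/my-cluster/my-service/ecs-svc/1234567890123456789'
)
print(response['taskSet']['status'])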
dict
Response Syntax
{ 'taskSet': { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } } }
Response Structure
(dict) --
taskSet (dict) --
The details about the task set.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
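The rounding behavior can be reproduced directly; here is a small sketch of the calculation described above, using placeholder values:
import math

# computedDesiredCount = ceil(service desiredCount * task set scale / 100)
desired_count = 3          # service's desiredCount (placeholder value)
scale_percent = 40.0       # task set scale, as a percentage (placeholder value)
computed = math.ceil(desired_count * scale_percent / 100)
print(computed)            # 1.2 rounds up to 2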
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that's associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) or CreateCluster APIs, or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
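A sketch of a strategy that exercises both base and weight as described above; the capacity provider names are placeholders:
# Sketch: the first 2 tasks land on 'on-demand-cp' (base=2); remaining tasks are
# then split 1:4 between 'on-demand-cp' and 'spot-cp' according to their weights.
capacity_provider_strategy = [
    {'capacityProvider': 'on-demand-cp', 'base': 2, 'weight': 1},
    {'capacityProvider': 'spot-cp', 'base': 0, 'weight': 4}
]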
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
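To illustrate the shape of a load balancer entry with advancedConfiguration for blue/green deployments, here is a hedged sketch in which every ARN, name, and port is a placeholder:
# Sketch only: placeholder ARNs for the production and alternate target groups,
# the listener rules used for traffic shifting, and the infrastructure role.
load_balancers = [
    {
        'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/blue-tg/0123456789abcdef',
        'containerName': 'web',
        'containerPort': 8080,
        'advancedConfiguration': {
            'alternateTargetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green-tg/fedcba9876543210',
            'productionListenerRule': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/my-alb/0123456789abcdef/1111222233334444/5555666677778888',
            'testListenerRule': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:listener-rule/app/my-alb/0123456789abcdef/1111222233334444/9999aaaabbbbcccc',
            'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole'
        }
    }
]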
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
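To make the containerName/containerPort versus port distinction concrete, two hedged sketches follow; the registry ARNs, names, and ports are placeholders:
# Sketch: SRV record with a bridge or host network mode task definition --
# supply a containerName and containerPort combination from the task definition.
registries_bridge_mode = [
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef0',
        'containerName': 'web',
        'containerPort': 8080
    }
]

# Sketch: SRV record with the awsvpc network mode -- supply port alone instead.
registries_awsvpc_mode = [
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef0',
        'port': 8080
    }
]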
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
{'taskSet': {'launchType': {'MANAGED_INSTANCES'}}}
Modifies a task set. This is used when a service uses the EXTERNAL deployment controller type. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
See also: AWS API Documentation
Request Syntax
client.update_task_set( cluster='string', service='string', taskSet='string', scale={ 'value': 123.0, 'unit': 'PERCENT' } )
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that the task set is found in.
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the service that the task set is found in.
string
[REQUIRED]
The short name or full Amazon Resource Name (ARN) of the task set to update.
dict
[REQUIRED]
A floating-point percentage of the desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
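A minimal usage sketch for this operation, with placeholder identifiers, scales the task set to half of the service's desired count:
import boto3

ecs = boto3.client('ecs')

# Sketch: set the task set's scale to 50 percent of the service's desiredCount.
response = ecs.update_task_set(
    cluster='my-cluster',
    service='my-service',
    taskSet='arn:aws:ecs:us-east-1:111122223333:task-set/my-cluster/my-service/ecs-svc/1234567890123456789',
    scale={'value': 50.0, 'unit': 'PERCENT'}
)
print(response['taskSet']['scale'])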
dict
Response Syntax
{ 'taskSet': { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL'|'MANAGED_INSTANCES', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123, 'advancedConfiguration': { 'alternateTargetGroupArn': 'string', 'productionListenerRule': 'string', 'testListenerRule': 'string', 'roleArn': 'string' } }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } } }
Response Structure
(dict) --
taskSet (dict) --
Details about the task set.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that's associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) or CreateCluster APIs, or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy can contain a maximum of 20 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
Weight value characteristics:
Weight is considered after the base value is satisfied
Default value is 0 if not specified
Valid range: 0 to 1,000
At least one capacity provider must have a weight greater than zero
Capacity providers with weight of 0 cannot place tasks
Task distribution logic:
Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider
Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios
Examples:
Equal Distribution: Two capacity providers both with weight 1 will split tasks evenly after base requirements are met.
Weighted Distribution: If capacityProviderA has weight 1 and capacityProviderB has weight 4, then for every 1 task on A, 4 tasks will run on B.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
Base value characteristics:
Only one capacity provider in a strategy can have a base defined
Default value is 0 if not specified
Valid range: 0 to 100,000
Base requirements are satisfied first before weight distribution
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address.
Consider the following when you set this value:
When you use create-service or update-service, the default is DISABLED.
When the service deploymentController is ECS, the value must be DISABLED.
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
advancedConfiguration (dict) --
The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.
alternateTargetGroupArn (string) --
The Amazon Resource Name (ARN) of the alternate target group for Amazon ECS blue/green deployments.
productionListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the production listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing production traffic.
testListenerRule (string) --
The Amazon Resource Name (ARN) that identifies the test listener rule (in the case of an Application Load Balancer) or listener (in the case of a Network Load Balancer) for routing test traffic.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that grants Amazon ECS permission to call the Elastic Load Balancing APIs for you.
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.