2024/11/18 - Amazon EC2 Container Service - 5 updated API methods
Changes: This release adds support for adding VPC Lattice configurations in the ECS CreateService/UpdateService APIs. The configuration allows for associating VPC Lattice target groups with ECS services.
Request
{'vpcLatticeConfigurations': [{'portName': 'string', 'roleArn': 'string', 'targetGroupArn': 'string'}]}
Response
{'service': {'deployments': {'vpcLatticeConfigurations': [{'portName': 'string', 'roleArn': 'string', 'targetGroupArn': 'string'}]}}}
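As a rough illustration of the new field, the sketch below passes a VPC Lattice configuration when creating a service. The cluster, service, task definition, port name, and both ARNs are hypothetical placeholders, not values from this release.

import boto3

ecs = boto3.client('ecs')

# Associate a VPC Lattice target group with the service at creation time.
# All names and ARNs below are made-up placeholders.
response = ecs.create_service(
    cluster='my-cluster',
    serviceName='web',
    taskDefinition='web-task:1',
    desiredCount=2,
    vpcLatticeConfigurations=[
        {
            # portName must match a port mapping name in the task definition.
            'portName': 'web-http',
            # Assumed to be an IAM role that lets ECS manage the association.
            'roleArn': 'arn:aws:iam::123456789012:role/ecsInfrastructureRole',
            'targetGroupArn': 'arn:aws:vpc-lattice:us-east-1:123456789012:targetgroup/tg-0123456789abcdef0',
        },
    ],
)

# The association is echoed back on the deployment object, as shown in the response shape above.
print(response['service']['deployments'][0].get('vpcLatticeConfigurations'))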
Runs and maintains your desired number of tasks from a specified task definition. If the number of tasks running in a service drops below the desiredCount, Amazon ECS runs another copy of the task in the specified cluster. To update an existing service, use UpdateService.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind one or more load balancers. The load balancers distribute traffic across the tasks that are associated with the service. For more information, see Service load balancing in the Amazon Elastic Container Service Developer Guide.
You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when creating or updating a service. volumeConfigurations is only supported for REPLICA services, not DAEMON services. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer.
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains your desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It also stops tasks that don't meet the placement constraints. When using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies. For more information, see Service scheduler concepts in the Amazon Elastic Container Service Developer Guide.
You can optionally specify a deployment configuration for your service. A deployment is initiated by changing properties of the service, such as the task definition or the desired count, with UpdateService. The default value of minimumHealthyPercent for a replica service is 100%. The default value of minimumHealthyPercent for a daemon service is 0%.
If a service uses the ECS deployment controller, the minimum healthy percent represents a lower limit on the number of tasks in a service that must remain in the RUNNING state during a deployment, as a percentage of your desired number of tasks (rounded up to the nearest integer). This limit also applies when any of your container instances are in the DRAINING state, if the service contains tasks using the EC2 launch type. Using this parameter, you can deploy without using additional cluster capacity. For example, if you set your service to have a desired number of four tasks and a minimum healthy percent of 50%, the scheduler might stop two existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that do use a load balancer are considered healthy if they're in the RUNNING state and reported as healthy by the load balancer. The default value for minimum healthy percent is 100%.
If a service uses the ECS deployment controller, the maximum percent parameter represents an upper limit on the number of tasks in a service that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desired number of tasks (rounded down to the nearest integer). This limit also applies when any of your container instances are in the DRAINING state, if the service contains tasks using the EC2 launch type. Using this parameter, you can define the deployment batch size. For example, if your service has a desired number of four tasks and a maximum percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default value for maximum percent is 200%.
If a service uses either the CODE_DEPLOY or EXTERNAL deployment controller types and has tasks that use the EC2 launch type, the minimum healthy percent and maximum percent values are used only to define the lower and upper limits on the number of tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. If the tasks in the service use the Fargate launch type, the minimum healthy percent and maximum percent values aren't used, even though they're currently visible when describing your service.
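The rounding in the two examples above can be checked directly; a quick sketch using the same hypothetical four-task service:

import math

desired_count = 4
minimum_healthy_percent = 50   # lower limit, rounded up
maximum_percent = 200          # upper limit, rounded down

min_running = math.ceil(desired_count * minimum_healthy_percent / 100)        # 2 tasks must stay RUNNING
max_running_or_pending = math.floor(desired_count * maximum_percent / 100)    # up to 8 tasks RUNNING or PENDING
print(min_running, max_running_or_pending)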
When creating a service that uses the EXTERNAL deployment controller, you can specify only parameters that aren't controlled at the task set level. The only required parameter is the service name. You control your services using the CreateTaskSet API. For more information, see Amazon ECS deployment types in the Amazon Elastic Container Service Developer Guide.
When the service scheduler launches new tasks, it determines task placement. For information about task placement and task placement strategies, see Amazon ECS task placement in the Amazon Elastic Container Service Developer Guide.
See also: AWS API Documentation
Request Syntax
client.create_service(
    cluster='string',
    serviceName='string',
    taskDefinition='string',
    loadBalancers=[
        {
            'targetGroupArn': 'string',
            'loadBalancerName': 'string',
            'containerName': 'string',
            'containerPort': 123
        },
    ],
    serviceRegistries=[
        {
            'registryArn': 'string',
            'port': 123,
            'containerName': 'string',
            'containerPort': 123
        },
    ],
    desiredCount=123,
    clientToken='string',
    launchType='EC2'|'FARGATE'|'EXTERNAL',
    capacityProviderStrategy=[
        {
            'capacityProvider': 'string',
            'weight': 123,
            'base': 123
        },
    ],
    platformVersion='string',
    role='string',
    deploymentConfiguration={
        'deploymentCircuitBreaker': {
            'enable': True|False,
            'rollback': True|False
        },
        'maximumPercent': 123,
        'minimumHealthyPercent': 123,
        'alarms': {
            'alarmNames': [
                'string',
            ],
            'rollback': True|False,
            'enable': True|False
        }
    },
    placementConstraints=[
        {
            'type': 'distinctInstance'|'memberOf',
            'expression': 'string'
        },
    ],
    placementStrategy=[
        {
            'type': 'random'|'spread'|'binpack',
            'field': 'string'
        },
    ],
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': [
                'string',
            ],
            'securityGroups': [
                'string',
            ],
            'assignPublicIp': 'ENABLED'|'DISABLED'
        }
    },
    healthCheckGracePeriodSeconds=123,
    schedulingStrategy='REPLICA'|'DAEMON',
    deploymentController={
        'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL'
    },
    tags=[
        {
            'key': 'string',
            'value': 'string'
        },
    ],
    enableECSManagedTags=True|False,
    propagateTags='TASK_DEFINITION'|'SERVICE'|'NONE',
    enableExecuteCommand=True|False,
    serviceConnectConfiguration={
        'enabled': True|False,
        'namespace': 'string',
        'services': [
            {
                'portName': 'string',
                'discoveryName': 'string',
                'clientAliases': [
                    {
                        'port': 123,
                        'dnsName': 'string'
                    },
                ],
                'ingressPortOverride': 123,
                'timeout': {
                    'idleTimeoutSeconds': 123,
                    'perRequestTimeoutSeconds': 123
                },
                'tls': {
                    'issuerCertificateAuthority': {
                        'awsPcaAuthorityArn': 'string'
                    },
                    'kmsKey': 'string',
                    'roleArn': 'string'
                }
            },
        ],
        'logConfiguration': {
            'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens',
            'options': {
                'string': 'string'
            },
            'secretOptions': [
                {
                    'name': 'string',
                    'valueFrom': 'string'
                },
            ]
        }
    },
    volumeConfigurations=[
        {
            'name': 'string',
            'managedEBSVolume': {
                'encrypted': True|False,
                'kmsKeyId': 'string',
                'volumeType': 'string',
                'sizeInGiB': 123,
                'snapshotId': 'string',
                'iops': 123,
                'throughput': 123,
                'tagSpecifications': [
                    {
                        'resourceType': 'volume',
                        'tags': [
                            {
                                'key': 'string',
                                'value': 'string'
                            },
                        ],
                        'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE'
                    },
                ],
                'roleArn': 'string',
                'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs'
            }
        },
    ],
    vpcLatticeConfigurations=[
        {
            'roleArn': 'string',
            'targetGroupArn': 'string',
            'portName': 'string'
        },
    ]
)
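Most of these parameters are optional. A minimal, hypothetical call (cluster, service, task definition, subnet, and security group values are placeholders):

import boto3

ecs = boto3.client('ecs')

# Two copies of a task definition on Fargate; awsvpc networking is required.
ecs.create_service(
    cluster='my-cluster',
    serviceName='hello-service',
    taskDefinition='hello-task:3',
    desiredCount=2,
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],
            'securityGroups': ['sg-0123456789abcdef0'],
            'assignPublicIp': 'ENABLED',
        }
    },
)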
string
The short name or full Amazon Resource Name (ARN) of the cluster that you run your service on. If you do not specify a cluster, the default cluster is assumed.
string
[REQUIRED]
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster, but you can have similarly named services in multiple clusters within a Region or across multiple Regions.
string
The family and revision ( family:revision) or full ARN of the task definition to run in your service. If a revision isn't specified, the latest ACTIVE revision is used.
A task definition must be specified if the service uses either the ECS or CODE_DEPLOY deployment controllers.
For more information about deployment types, see Amazon ECS deployment types.
list
A load balancer object representing the load balancers to use with your service. For more information, see Service load balancing in the Amazon Elastic Container Service Developer Guide.
If the service uses the rolling update ( ECS) deployment controller and using either an Application Load Balancer or Network Load Balancer, you must specify one or more target group ARNs to attach to the service. The service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles for Amazon ECS in the Amazon Elastic Container Service Developer Guide.
If the service uses the CODE_DEPLOY deployment controller, the service is required to use either an Application Load Balancer or Network Load Balancer. When creating a CodeDeploy deployment group, you specify two target groups (referred to as a targetGroupPair). During a deployment, CodeDeploy determines which task set in your service has the status PRIMARY, and it associates one target group with it. Then, it also associates the other target group with the replacement task set. The load balancer can also have up to two listeners: a required listener for production traffic and an optional listener that you can use to perform validation tests with Lambda functions before routing production traffic to it.
If you use the CODE_DEPLOY deployment controller, these values can be changed when updating the service.
For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name, and the container port to access from the load balancer. The container name must be as it appears in a container definition. The load balancer name parameter must be omitted. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group that's specified here.
For Classic Load Balancers, this object must contain the load balancer name, the container name, and the container port to access from the load balancer. The container name must be as it appears in a container definition. The target group ARN parameter must be omitted. When a task from this service is placed on a container instance, the container instance is registered with the load balancer that's specified here.
Services with tasks that use the awsvpc network mode (for example, those with the Fargate launch type) only support Application Load Balancers and Network Load Balancers. Classic Load Balancers aren't supported. Also, when you create any target groups for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
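A hedged sketch of the Application Load Balancer case described above; all names, IDs, and ARNs are hypothetical placeholders. Because the tasks use the awsvpc network mode, the target group is assumed to use the ip target type.

import boto3

ecs = boto3.client('ecs')

ecs.create_service(
    cluster='my-cluster',
    serviceName='web',
    taskDefinition='web-task:1',
    desiredCount=2,
    launchType='FARGATE',
    loadBalancers=[
        {
            'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef',
            'containerName': 'web',    # as it appears in the container definition
            'containerPort': 80,       # must match a containerPort in the task definition
        },
    ],
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],
            'securityGroups': ['sg-0123456789abcdef0'],
            'assignPublicIp': 'ENABLED',
        }
    },
)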
list
The details of the service discovery registry to associate with this service. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
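As a hedged sketch of the SRV-record case described above (the registry ARN, names, and port are hypothetical placeholders), a service whose task definition uses the bridge network mode could be registered like this:

import boto3

ecs = boto3.client('ecs')

# Register tasks with an existing Cloud Map service that uses SRV records.
# Because the task definition uses the bridge network mode, a containerName
# and containerPort combination from the task definition is supplied.
ecs.create_service(
    cluster='my-cluster',
    serviceName='backend',
    taskDefinition='backend-task:5',
    desiredCount=2,
    launchType='EC2',
    serviceRegistries=[
        {
            'registryArn': 'arn:aws:servicediscovery:us-east-1:123456789012:service/srv-0123456789abcdef0',
            'containerName': 'backend',
            'containerPort': 8080,
        },
    ],
)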
integer
The number of instantiations of the specified task definition to place and keep running in your service.
This is required if schedulingStrategy is REPLICA or isn't specified. If schedulingStrategy is DAEMON then this isn't required.
string
An identifier that you provide to ensure the idempotency of the request. It must be unique and is case sensitive. Up to 36 ASCII characters in the range of 33-126 (inclusive) are allowed.
string
The infrastructure that you run your service on. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
The FARGATE launch type runs your tasks on Fargate On-Demand infrastructure.
The EC2 launch type runs your tasks on Amazon EC2 instances registered to your cluster.
The EXTERNAL launch type runs your tasks on your on-premises server or virtual machine (VM) capacity registered to your cluster.
A service can use either a launch type or a capacity provider strategy. If a launchType is specified, the capacityProviderStrategy parameter must be omitted.
list
The capacity provider strategy to use for the service.
If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used.
A capacity provider strategy may contain a maximum of 6 capacity providers.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) -- [REQUIRED]
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers, each with a weight of 1. When the base is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
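A hedged sketch of base and weight in practice (cluster, service, and network values are placeholders): one task always runs on FARGATE, and the remaining tasks are split 1:4 between FARGATE and FARGATE_SPOT. launchType is omitted because the two parameters are mutually exclusive.

import boto3

ecs = boto3.client('ecs')

ecs.create_service(
    cluster='my-cluster',
    serviceName='worker',
    taskDefinition='worker-task:2',
    desiredCount=10,
    capacityProviderStrategy=[
        {'capacityProvider': 'FARGATE', 'base': 1, 'weight': 1},
        {'capacityProvider': 'FARGATE_SPOT', 'base': 0, 'weight': 4},
    ],
    # Fargate capacity providers run tasks in awsvpc mode, so a network
    # configuration is still required.
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],
            'securityGroups': ['sg-0123456789abcdef0'],
        }
    },
)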
string
The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
string
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition doesn't use the awsvpc network mode. If you specify the role parameter, you must also specify a load balancer object with the loadBalancers parameter.
If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/ then you would specify /foo/bar as the role name. For more information, see Friendly names and paths in the IAM User Guide.
dict
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide
enable (boolean) -- [REQUIRED]
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) -- [REQUIRED]
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) -- [REQUIRED]
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) -- [REQUIRED]
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) -- [REQUIRED]
Determines whether to use the CloudWatch alarm option in the service deployment process.
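A hedged sketch that combines the rolling-update limits, the circuit breaker, and a CloudWatch alarm rollback; the alarm name and all other values are hypothetical.

import boto3

ecs = boto3.client('ecs')

ecs.create_service(
    cluster='my-cluster',
    serviceName='api',
    taskDefinition='api-task:7',
    desiredCount=4,
    launchType='EC2',
    deploymentConfiguration={
        # With desiredCount=4: at least ceil(4 * 100 / 100) = 4 tasks stay RUNNING,
        # and at most floor(4 * 200 / 100) = 8 tasks may be RUNNING or PENDING.
        'maximumPercent': 200,
        'minimumHealthyPercent': 100,
        'deploymentCircuitBreaker': {'enable': True, 'rollback': True},
        'alarms': {
            'alarmNames': ['api-5xx-alarm'],
            'enable': True,
            'rollback': True,
        },
    },
)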
list
An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
list
The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules for each service.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
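A hedged sketch for an EC2-backed service (names and the instance-type expression are placeholders): spread tasks across Availability Zones, then binpack on memory, and restrict placement with a cluster query language expression.

import boto3

ecs = boto3.client('ecs')

ecs.create_service(
    cluster='my-cluster',
    serviceName='batch-workers',
    taskDefinition='worker-task:2',
    desiredCount=6,
    launchType='EC2',
    placementStrategy=[
        {'type': 'spread', 'field': 'attribute:ecs.availability-zone'},
        {'type': 'binpack', 'field': 'memory'},
    ],
    placementConstraints=[
        {'type': 'memberOf', 'expression': 'attribute:ecs.instance-type =~ t3.*'},
    ],
)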
dict
The network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own elastic network interface, and it isn't supported for other network modes. For more information, see Task networking in the Amazon Elastic Container Service Developer Guide.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) -- [REQUIRED]
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
integer
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started. If you don't specify a health check grace period value, the default value of 0 is used. If you don't use any of the health checks, then healthCheckGracePeriodSeconds is unused.
If your service's tasks take a while to start and respond to health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
string
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available:
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions. This scheduler strategy is required if the service uses the CODE_DEPLOY or EXTERNAL deployment controller types.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks and will stop tasks that don't meet the placement constraints. When you're using this strategy, you don't need to specify a desired number of tasks, a task placement strategy, or use Service Auto Scaling policies.
dict
The deployment controller to use for the service. If no deployment controller is specified, the default value of ECS is used.
type (string) -- [REQUIRED]
The deployment controller type to use.
There are three deployment controller types available:
ECS
The rolling update ( ECS) deployment type involves replacing the current running version of the container with the latest version. The number of containers Amazon ECS adds or removes from the service during a rolling update is controlled by adjusting the minimum and maximum number of healthy tasks allowed during a service deployment, as specified in the DeploymentConfiguration.
CODE_DEPLOY
The blue/green ( CODE_DEPLOY) deployment type uses the blue/green deployment model powered by CodeDeploy, which allows you to verify a new deployment of a service before sending production traffic to it.
EXTERNAL
The external ( EXTERNAL) deployment type enables you to use any third-party deployment controller for full control over the deployment process for an Amazon ECS service.
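For the EXTERNAL case, only the service name is required, as noted earlier; everything else is managed at the task set level. A hypothetical sketch with placeholder names:

import boto3

ecs = boto3.client('ecs')

# Create the service shell; task sets are then created and managed
# separately with the CreateTaskSet API.
ecs.create_service(
    cluster='my-cluster',
    serviceName='externally-managed',
    deploymentController={'type': 'EXTERNAL'},
)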
list
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
boolean
Specifies whether to turn on Amazon ECS managed tags for the tasks within the service. For more information, see Tagging your Amazon ECS resources in the Amazon Elastic Container Service Developer Guide.
When you use Amazon ECS managed tags, you need to set the propagateTags request parameter.
string
Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action.
You must set this to a value other than NONE when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide.
The default is NONE.
boolean
Determines whether the execute command functionality is turned on for the service. If true, this enables execute command functionality on all containers in the service tasks.
dict
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) -- [REQUIRED]
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) -- [REQUIRED]
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) -- [REQUIRED]
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP/ HTTP2/ GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) -- [REQUIRED]
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) -- [REQUIRED]
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes for the Fargate launch type; optional for the EC2 launch type.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.
If you use the blocking mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
max-buffer-size
Required: No
Default value: 1m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help to resolve potential log loss issues, because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) -- [REQUIRED]
The name of the secret.
valueFrom (string) -- [REQUIRED]
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
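Pulling the Service Connect pieces together, the following hedged sketch exposes this service to the namespace as orders:8080 and sends the Service Connect proxy logs to CloudWatch with the awslogs driver. The names, namespace, subnets, and log group are hypothetical placeholders.

import boto3

ecs = boto3.client('ecs')

ecs.create_service(
    cluster='my-cluster',
    serviceName='orders',
    taskDefinition='orders-task:4',
    desiredCount=2,
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0123456789abcdef0'],
            'securityGroups': ['sg-0123456789abcdef0'],
        }
    },
    serviceConnectConfiguration={
        'enabled': True,
        'namespace': 'internal',           # Cloud Map namespace name or ARN
        'services': [
            {
                'portName': 'orders-http',  # must match a port mapping name in the task definition
                'clientAliases': [
                    {'port': 8080, 'dnsName': 'orders'},
                ],
            },
        ],
        'logConfiguration': {
            'logDriver': 'awslogs',
            'options': {
                'awslogs-create-group': 'true',
                'awslogs-group': '/ecs/orders-service-connect',
                'awslogs-region': 'us-east-1',
                'awslogs-stream-prefix': 'orders',
                'mode': 'non-blocking',
                'max-buffer-size': '4m',
            },
        },
    },
)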
list
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) -- [REQUIRED]
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If no value is specified, encryption is turned on by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When encryption is turned on and no Amazon Web Services Key Management Service key is specified, the default Amazon Web Services managed key for Amazon EBS volumes is used. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create the volume. You must specify either a snapshot ID or a volume size. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) -- [REQUIRED]
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) -- [REQUIRED]
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
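Putting the fields above together, a volumeConfigurations entry passed to create_service might look like this minimal sketch; the volume name, sizing values, and role ARN are illustrative placeholders.
volumeConfigurations=[
    {
        # must match a volume name declared in the task definition
        'name': 'ebs-data',
        'managedEBSVolume': {
            'encrypted': True,
            'volumeType': 'gp3',
            'sizeInGiB': 20,        # or supply 'snapshotId' instead of a size
            'iops': 3000,
            'throughput': 125,
            'filesystemType': 'xfs',
            'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole'
        }
    }
]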
list
The VPC Lattice configuration for the service being created.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to.
roleArn (string) -- [REQUIRED]
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) -- [REQUIRED]
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) -- [REQUIRED]
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
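The following is a minimal sketch of a create_service call that uses this configuration; the cluster, service, task definition, role, target group ARN, and port name are placeholders.
import boto3

ecs = boto3.client('ecs')
response = ecs.create_service(
    cluster='my-cluster',
    serviceName='lattice-web',
    taskDefinition='web:1',
    desiredCount=2,
    vpcLatticeConfigurations=[
        {
            'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole',
            'targetGroupArn': 'arn:aws:vpc-lattice:us-east-1:111122223333:targetgroup/tg-0123456789abcdef0',
            # must match a portMapping name in the task definition
            'portName': 'web'
        }
    ]
)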
dict
Response Syntax
{ 'service': { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123 }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False } }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123 }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string' }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 
'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False } }Response Structure
(dict) --
service (dict) --
The full description of your service following the create call.
A service will return either a capacityProviderStrategy or launchType parameter, but not both, depending on where one was specified when it was created.
If a service is using the ECS deployment controller, the deploymentController and taskSets parameters will not be returned.
If the service uses the CODE_DEPLOY deployment controller, the deploymentController, taskSets, and deployments parameters are returned; however, the deployments parameter will be an empty list.
serviceArn (string) --
The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide.
serviceName (string) --
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers (list) --
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, omit the load balancer name parameter.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
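For illustration, a loadBalancers entry for a service behind an Application Load Balancer target group might look like this sketch; the target group ARN, container name, and port are placeholders.
loadBalancers=[
    {
        'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tg/0123456789abcdef',
        'containerName': 'web',   # as it appears in the container definition
        'containerPort': 8080
    }
]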
serviceRegistries (list) --
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
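A sketch of a serviceRegistries entry for a Cloud Map service using a containerName and containerPort combination follows; the registry ARN, container name, and port are placeholders.
serviceRegistries=[
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef0',
        'containerName': 'web',   # containerName/containerPort pair from the task definition
        'containerPort': 8080
    }
]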
status (string) --
The status of the service. The valid values are ACTIVE, DRAINING, or INACTIVE.
desiredCount (integer) --
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService , and it can be modified with UpdateService.
runningCount (integer) --
The number of tasks in the cluster that are in the RUNNING state.
pendingCount (integer) --
The number of tasks in the cluster that are in the PENDING state.
launchType (string) --
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy (list) --
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
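The weighting scenario described above could be expressed as the following strategy sketch; the capacity provider names are placeholders.
capacityProviderStrategy=[
    {'capacityProvider': 'capacityProviderA', 'weight': 1, 'base': 1},
    {'capacityProvider': 'capacityProviderB', 'weight': 4, 'base': 0}
]
# Once the base of 1 task on capacityProviderA is satisfied, roughly one task runs
# on capacityProviderA for every four tasks placed on capacityProviderB.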
platformVersion (string) --
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
taskDefinition (string) --
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
deploymentConfiguration (dict) --
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide
enable (boolean) --
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
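As a quick check of the minimum healthy task arithmetic described above, assuming a desiredCount of four and a minimumHealthyPercent of 50%:
import math

desired_count = 4
minimum_healthy_percent = 50
# desiredCount * minimumHealthyPercent/100, rounded up to the nearest integer
min_healthy_tasks = math.ceil(desired_count * minimum_healthy_percent / 100)
print(min_healthy_tasks)  # 2, so the scheduler may stop two tasks before starting new ones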
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) --
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) --
Determines whether to use the CloudWatch alarm option in the service deployment process.
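Taken together, a deploymentConfiguration that enables both the circuit breaker and CloudWatch alarm rollback might look like the following sketch; the alarm name is a placeholder.
deploymentConfiguration={
    'deploymentCircuitBreaker': {'enable': True, 'rollback': True},
    'maximumPercent': 200,
    'minimumHealthyPercent': 100,
    'alarms': {
        'alarmNames': ['my-service-high-error-rate'],
        'enable': True,
        'rollback': True
    }
}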
taskSets (list) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
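For example, the rounding behavior can be reproduced as follows, assuming a desiredCount of four and a task set scale of 30%:
import math

computed_desired_count = math.ceil(4 * 30 / 100)  # 1.2 rounds up to 2 tasks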
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that is associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
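For reference, the awsvpcConfiguration shape described above looks like this sketch; the subnet and security group IDs are placeholders.
networkConfiguration={
    'awsvpcConfiguration': {
        'subnets': ['subnet-0123456789abcdef0'],      # up to 16 subnets
        'securityGroups': ['sg-0123456789abcdef0'],   # up to 5 security groups
        'assignPublicIp': 'DISABLED'
    }
}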
loadBalancers (list) --
Details on a load balancer that is used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, omit the load balancer name parameter.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
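A tags list using this shape might look like the following sketch; the keys and values are placeholders.
tags=[
    {'key': 'environment', 'value': 'production'},
    {'key': 'team', 'value': 'payments'}
]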
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment.
deployments (list) --
The current state of deployments for the service.
(dict) --
The details of an Amazon ECS service deployment. This is used only when a service uses the ECS deployment controller type.
id (string) --
The ID of the deployment.
status (string) --
The status of the deployment. The following describes each state.
PRIMARY
The most recent deployment of a service.
ACTIVE
A service deployment that still has running tasks but is in the process of being replaced with a new PRIMARY deployment.
INACTIVE
A deployment that has been completely replaced.
taskDefinition (string) --
The most recent task definition that was specified for the tasks in the service to use.
desiredCount (integer) --
The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount (integer) --
The number of tasks in the deployment that are in the PENDING status.
runningCount (integer) --
The number of tasks in the deployment that are in the RUNNING status.
failedTasks (integer) --
The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a RUNNING state, or if it fails any of its defined health checks and is stopped.
createdAt (datetime) --
The Unix timestamp for the time when the service deployment was created.
updatedAt (datetime) --
The Unix timestamp for the time when the service deployment was last updated.
capacityProviderStrategy (list) --
The capacity provider strategy that the deployment is using.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
launchType (string) --
The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
platformVersion (string) --
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
rolloutState (string) --
The rollout state of the deployment. When a service deployment is started, it begins in an IN_PROGRESS state. When the service reaches a steady state, the deployment transitions to a COMPLETED state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a FAILED state. A deployment in FAILED state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker.
rolloutStateReason (string) --
A description of the rollout state of a deployment.
serviceConnectConfiguration (dict) --
The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments.
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) --
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP/HTTP2/GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) --
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
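A minimal serviceConnectConfiguration sketch that uses these fields follows; the namespace, port name, alias, and timeouts are placeholders, and the TLS block is omitted.
serviceConnectConfiguration={
    'enabled': True,
    'namespace': 'internal',
    'services': [
        {
            'portName': 'api',        # must match a portMapping name in the task definition
            'discoveryName': 'api',
            'clientAliases': [
                {'port': 8080, 'dnsName': 'api.internal'}
            ],
            'timeout': {'idleTimeoutSeconds': 300, 'perRequestTimeoutSeconds': 30}
        }
    ]
}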
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task, for example, Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Optional for the EC2 launch type; required for the Fargate launch type.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.
If you use the blocking mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
max-buffer-size
Required: No
Default value: 1m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before they are sent to the log router container. This can help resolve potential log loss issues, because high throughput might exhaust the memory available for the buffer inside Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
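To make the awslogs options above concrete, here is a sketch of a log configuration map using the options just discussed (region, group, stream prefix, non-blocking mode, and buffer size). The group name, Region, and prefix are hypothetical.
# Hypothetical logConfiguration illustrating the awslogs options above.
log_configuration = {
    'logDriver': 'awslogs',
    'options': {
        'awslogs-create-group': 'true',         # create the log group if it doesn't exist
        'awslogs-region': 'us-east-1',
        'awslogs-group': '/ecs/my-service',
        'awslogs-stream-prefix': 'my-service',  # stream name: prefix/container-name/ecs-task-id
        'mode': 'non-blocking',                 # buffer logs instead of blocking stdout/stderr
        'max-buffer-size': '25m',               # only used with non-blocking mode
    },
}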
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
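A hedged sketch of the secretOptions shape described above, assuming a hypothetical Secrets Manager secret that holds, for example, a Splunk token; the secret ARN and endpoint are placeholders.
# Hypothetical secretOptions entry; the secret ARN is a placeholder.
log_configuration_with_secret = {
    'logDriver': 'splunk',
    'options': {
        'splunk-url': 'https://splunk.example.com:8088',
    },
    'secretOptions': [
        {
            'name': 'splunk-token',  # option name the log driver expects
            'valueFrom': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:splunk-token-AbCdEf',
        },
    ],
}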
serviceConnectResources (list) --
The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name.
(dict) --
The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service.
A task can resolve the dnsName for each of the clientAliases of a service. However a task can't resolve the discovery names. If you want to connect to a service, refer to the ServiceConnectConfiguration of that service for the list of clientAliases that you can use.
discoveryName (string) --
The discovery name of this Service Connect resource.
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
discoveryArn (string) --
The Amazon Resource Name (ARN) for the namespace in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS.
volumeConfigurations (list) --
The details of the volume that was configuredAtLaunch. You can configure different settings like the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) --
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If no value is specified, encryption is turned on by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When encryption is turned on and no Amazon Web Services Key Management Service key is specified, the default Amazon Web Services managed key for Amazon EBS volumes is used. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create the volume. You must specify either a snapshot ID or a volume size. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) --
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) --
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
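The following sketch shows one way the volumeConfigurations and managedEBSVolume fields described above could be supplied to create_service. The volume name must match a configuredAtLaunch volume in the task definition; the role ARN, sizes, and tags here are hypothetical.
# Hypothetical volumeConfigurations entry for a service-managed EBS volume.
volume_configurations = [
    {
        'name': 'data',  # must match the volume name in the task definition
        'managedEBSVolume': {
            'encrypted': True,
            'volumeType': 'gp3',
            'sizeInGiB': 100,   # gp3 supports 1-16,384 GiB
            'iops': 3000,       # gp3 default
            'throughput': 125,  # MiB/s
            'filesystemType': 'xfs',
            'roleArn': 'arn:aws:iam::123456789012:role/ecsInfrastructureRole',
            'tagSpecifications': [
                {
                    'resourceType': 'volume',
                    'tags': [{'key': 'team', 'value': 'payments'}],
                    'propagateTags': 'SERVICE',
                },
            ],
        },
    },
]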
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the deployment.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
vpcLatticeConfigurations (list) --
The VPC Lattice configuration for the service deployment.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to.
roleArn (string) --
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) --
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
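Tying the three VPC Lattice fields together, a create_service or update_service call might include a configuration like the sketch below; the role ARN, target group ARN, and port name are placeholders.
# Hypothetical vpcLatticeConfigurations entry; ARNs and names are placeholders.
vpc_lattice_configurations = [
    {
        'roleArn': 'arn:aws:iam::123456789012:role/ecsInfrastructureRole',
        'targetGroupArn': 'arn:aws:vpc-lattice:us-east-1:123456789012:targetgroup/tg-0123456789abcdef0',
        'portName': 'http',  # portMapping name from the task definition
    },
]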
roleArn (string) --
The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events (list) --
The event stream for your service. A maximum of 100 of the latest events are displayed.
(dict) --
The details for an event that's associated with a service.
id (string) --
The ID string for the event.
createdAt (datetime) --
The Unix timestamp for the time when the event was triggered.
message (string) --
The event message.
createdAt (datetime) --
The Unix timestamp for the time when the service was created.
placementConstraints (list) --
The placement constraints for the tasks in the service.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
placementStrategy (list) --
The placement strategy that determines how tasks for the service are placed.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
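As an illustration of the strategy types and fields above, a service might combine a spread rule and a binpack rule, for example:
# Hypothetical placement rules: spread across Availability Zones, then binpack on memory.
placement_strategy = [
    {'type': 'spread', 'field': 'attribute:ecs.availability-zone'},
    {'type': 'binpack', 'field': 'memory'},
]
placement_constraints = [
    {'type': 'distinctInstance'},  # an expression can't be specified with distinctInstance
]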
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
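A minimal sketch of the awsvpc network configuration described above; the subnet and security group IDs are placeholders.
# Hypothetical awsvpc network configuration (up to 16 subnets, 5 security groups).
network_configuration = {
    'awsvpcConfiguration': {
        'subnets': ['subnet-0123456789abcdef0', 'subnet-0fedcba9876543210'],
        'securityGroups': ['sg-0123456789abcdef0'],
        'assignPublicIp': 'DISABLED',
    },
}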
healthCheckGracePeriodSeconds (integer) --
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started.
schedulingStrategy (string) --
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available.
REPLICA-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON-The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints.
deploymentController (dict) --
The deployment controller type the service is using.
type (string) --
The deployment controller type to use.
There are three deployment controller types available:
ECS
The rolling update ( ECS) deployment type involves replacing the current running version of the container with the latest version. The number of containers Amazon ECS adds or removes from the service during a rolling update is controlled by adjusting the minimum and maximum number of healthy tasks allowed during a service deployment, as specified in the DeploymentConfiguration.
CODE_DEPLOY
The blue/green ( CODE_DEPLOY) deployment type uses the blue/green deployment model powered by CodeDeploy, which allows you to verify a new deployment of a service before sending production traffic to it.
EXTERNAL
The external ( EXTERNAL) deployment type enables you to use any third-party deployment controller for full control over the deployment process for an Amazon ECS service.
tags (list) --
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
createdBy (string) --
The principal that created the service.
enableECSManagedTags (boolean) --
Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
propagateTags (string) --
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
enableExecuteCommand (boolean) --
Determines whether the execute command functionality is turned on for the service. If true, the execute command functionality is turned on for all containers in tasks as part of the service.
{'service': {'deployments': {'vpcLatticeConfigurations': [{'portName': 'string', 'roleArn': 'string', 'targetGroupArn': 'string'}]}}}
Deletes a specified service within a cluster. You can delete a service if you have no running tasks in it and the desired task count is zero. If the service is actively maintaining tasks, you can't delete it, and you must update the service to a desired task count of zero. For more information, see UpdateService.
See also: AWS API Documentation
Request Syntax
client.delete_service( cluster='string', service='string', force=True|False )
string
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to delete. If you do not specify a cluster, the default cluster is assumed.
string
[REQUIRED]
The name of the service to delete.
boolean
If true, allows you to delete a service even if it wasn't scaled down to zero tasks. It's only necessary to use this if the service uses the REPLICA scheduling strategy.
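Put together, a typical deletion flow scales the service to zero tasks and then deletes it; the cluster and service names in this sketch are hypothetical.
import boto3

ecs = boto3.client('ecs')

# Scale the service down first so delete_service doesn't require force=True.
ecs.update_service(cluster='my-cluster', service='my-service', desiredCount=0)
response = ecs.delete_service(cluster='my-cluster', service='my-service')
print(response['service']['status'])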
dict
Response Syntax
{ 'service': { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123 }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False } }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123 }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string' }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 
'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False } }Response Structure
(dict) --
service (dict) --
The full description of the deleted service.
serviceArn (string) --
The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide.
serviceName (string) --
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers (list) --
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
status (string) --
The status of the service. The valid values are ACTIVE, DRAINING, or INACTIVE.
desiredCount (integer) --
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService , and it can be modified with UpdateService.
runningCount (integer) --
The number of tasks in the cluster that are in the RUNNING state.
pendingCount (integer) --
The number of tasks in the cluster that are in the PENDING state.
launchType (string) --
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy (list) --
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers, both with a weight of 1. When the base is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
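To illustrate how base and weight interact, the sketch below pins a baseline of two tasks to FARGATE and splits the remaining tasks 1:4 between FARGATE and FARGATE_SPOT; the ratio is an assumption for the example.
# Hypothetical strategy: the first 2 tasks on FARGATE, then 1 FARGATE task
# for every 4 FARGATE_SPOT tasks.
capacity_provider_strategy = [
    {'capacityProvider': 'FARGATE', 'base': 2, 'weight': 1},
    {'capacityProvider': 'FARGATE_SPOT', 'base': 0, 'weight': 4},
]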
platformVersion (string) --
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
taskDefinition (string) --
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
deploymentConfiguration (dict) --
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide
enable (boolean) --
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) --
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) --
Determines whether to use the CloudWatch alarm option in the service deployment process.
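Combining the maximumPercent, minimumHealthyPercent, circuit breaker, and alarm fields described above, a deployment configuration could look like the sketch below; the alarm name is hypothetical.
# Hypothetical deployment configuration for a rolling (ECS) deployment.
deployment_configuration = {
    'maximumPercent': 200,         # allow up to 2x desiredCount during a deployment
    'minimumHealthyPercent': 100,  # keep all desired tasks RUNNING while deploying
    'deploymentCircuitBreaker': {'enable': True, 'rollback': True},
    'alarms': {
        'alarmNames': ['my-service-5xx-alarm'],
        'enable': True,
        'rollback': True,
    },
}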
taskSets (list) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
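The rounding described above can be expressed directly; for example, with a desiredCount of 3 and a 40% scale, the computed desired count is 2:
import math

# computedDesiredCount = ceil(desiredCount * scale.value / 100)
desired_count = 3
scale_value = 40.0  # scale.unit is PERCENT
computed = math.ceil(desired_count * scale_value / 100)  # 1.2 rounds up to 2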
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that's associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers, both with a weight of 1. When the base is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
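A minimal sketch of the service registry fields, assuming an existing Cloud Map service that uses an SRV record with the awsvpc network mode; the registry ARN and names are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="backend",
    taskDefinition="backend-task:3",
    desiredCount=1,
    serviceRegistries=[
        {
            "registryArn": "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-0123456789abcdef",
            # With awsvpc network mode and an SRV record, specify either a
            # containerName/containerPort pair or a port value, not both.
            "containerName": "backend",
            "containerPort": 8080,
        }
    ],
)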
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
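As a quick, hedged example of the tag restrictions above, the following adds a key-value tag to a resource with TagResource; the service ARN shown is a placeholder.
import boto3

ecs = boto3.client("ecs")

# Keys are limited to 128 characters and values to 256; the aws: prefix is reserved.
ecs.tag_resource(
    resourceArn="arn:aws:ecs:us-east-1:123456789012:service/my-cluster/web",
    tags=[{"key": "environment", "value": "production"}],
)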
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
deployments (list) --
The current state of deployments for the service.
(dict) --
The details of an Amazon ECS service deployment. This is used only when a service uses the ECS deployment controller type.
id (string) --
The ID of the deployment.
status (string) --
The status of the deployment. The following describes each state.
PRIMARY
The most recent deployment of a service.
ACTIVE
A service deployment that still has running tasks, but is in the process of being replaced with a new PRIMARY deployment.
INACTIVE
A deployment that has been completely replaced.
taskDefinition (string) --
The most recent task definition that was specified for the tasks in the service to use.
desiredCount (integer) --
The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount (integer) --
The number of tasks in the deployment that are in the PENDING status.
runningCount (integer) --
The number of tasks in the deployment that are in the RUNNING status.
failedTasks (integer) --
The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a RUNNING state, or if it fails any of its defined health checks and is stopped.
createdAt (datetime) --
The Unix timestamp for the time when the service deployment was created.
updatedAt (datetime) --
The Unix timestamp for the time when the service deployment was last updated.
capacityProviderStrategy (list) --
The capacity provider strategy that the deployment is using.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
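The weight and base semantics above can be seen in a short sketch that splits tasks between the Fargate capacity providers; the cluster, service, and task counts are illustrative only.
import boto3

ecs = boto3.client("ecs")

# The first task always lands on FARGATE (base=1); remaining tasks are split
# 1:4 between FARGATE and FARGATE_SPOT according to their weights.
strategy = [
    {"capacityProvider": "FARGATE", "base": 1, "weight": 1},
    {"capacityProvider": "FARGATE_SPOT", "weight": 4},
]

ecs.create_service(
    cluster="my-cluster",
    serviceName="spot-tolerant",
    taskDefinition="app:5",
    desiredCount=10,
    capacityProviderStrategy=strategy,
)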
launchType (string) --
The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
platformVersion (string) --
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
rolloutState (string) --
The rollout state of the deployment. When a service deployment is started, it begins in an IN_PROGRESS state. When the service reaches a steady state, the deployment transitions to a COMPLETED state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a FAILED state. A deployment in FAILED state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker.
rolloutStateReason (string) --
A description of the rollout state of a deployment.
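A small sketch of reading the rollout state of the current PRIMARY deployment with DescribeServices; the cluster and service names are placeholders.
import boto3

ecs = boto3.client("ecs")

response = ecs.describe_services(cluster="my-cluster", services=["web"])
for deployment in response["services"][0]["deployments"]:
    if deployment["status"] == "PRIMARY":
        # rolloutState is IN_PROGRESS, COMPLETED, or FAILED.
        print(deployment["rolloutState"], deployment.get("rolloutStateReason", ""))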
serviceConnectConfiguration (dict) --
The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments.
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) --
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP/HTTP2/GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
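Putting the Service Connect fields above together, here is a hedged sketch of a serviceConnectConfiguration passed to UpdateService; the namespace, port name, alias, and timeouts are hypothetical, and the port name must match a portMapping name in the task definition.
import boto3

ecs = boto3.client("ecs")

service_connect = {
    "enabled": True,
    "namespace": "internal",  # Cloud Map namespace name or ARN
    "services": [
        {
            "portName": "api",  # must match a portMapping name in the task definition
            "discoveryName": "api",
            "clientAliases": [{"port": 8080, "dnsName": "api.internal"}],
            "timeout": {"idleTimeoutSeconds": 300, "perRequestTimeoutSeconds": 30},
        }
    ],
}

ecs.update_service(
    cluster="my-cluster",
    service="api",
    serviceConnectConfiguration=service_connect,
)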
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) --
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using the Fargate launch type. Optional for the EC2 launch type.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.
If you use the blocking mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
max-buffer-size
Required: No
Default value: 1m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
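The awslogs options described above might be combined as in the following sketch of a log configuration dict for the Service Connect proxy; the log group, Region, and prefix are placeholders, and non-blocking mode with a 4m buffer is just one possible choice.
# A possible awslogs configuration using the options discussed above.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/service-connect-proxy",  # must exist unless awslogs-create-group is "true"
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "proxy",
        "mode": "non-blocking",
        "max-buffer-size": "4m",
    },
}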
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
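For the splunk driver mentioned above, the sensitive splunk-token can be supplied through secretOptions rather than plain options, roughly as in this sketch; the URL and secret ARN are placeholders.
log_configuration = {
    "logDriver": "splunk",
    "options": {"splunk-url": "https://splunk.example.com:8088"},
    "secretOptions": [
        {
            # Full ARN of a Secrets Manager secret (an SSM parameter ARN also works).
            "name": "splunk-token",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:splunk-token-AbCdEf",
        }
    ],
}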
serviceConnectResources (list) --
The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name.
(dict) --
The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service.
A task can resolve the dnsName for each of the clientAliases of a service. However a task can't resolve the discovery names. If you want to connect to a service, refer to the ServiceConnectConfiguration of that service for the list of clientAliases that you can use.
discoveryName (string) --
The discovery name of this Service Connect resource.
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
discoveryArn (string) --
The Amazon Resource Name (ARN) for the namespace in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS.
volumeConfigurations (list) --
The details of the volume that was configuredAtLaunch. You can configure different settings like the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) --
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If no value is specified, encryption is turned on by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When encryption is turned on and no Amazon Web Services Key Management Service key is specified, the default Amazon Web Services managed key for Amazon EBS volumes is used. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create the volume. You must specify either a snapshot ID or a volume size. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) --
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) --
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
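A hedged sketch of a configuredAtLaunch Amazon EBS volume supplied through volumeConfigurations when creating a service; the volume name must match the task definition, and the size, IOPS, throughput, and role ARN shown are illustrative.
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="stateful",
    taskDefinition="stateful:2",
    desiredCount=1,
    volumeConfigurations=[
        {
            "name": "data",  # must match a configuredAtLaunch volume in the task definition
            "managedEBSVolume": {
                "volumeType": "gp3",
                "sizeInGiB": 100,   # gp3 supports 1-16,384 GiB
                "iops": 3000,       # the gp3 default
                "throughput": 125,
                "filesystemType": "xfs",
                "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",
            },
        }
    ],
)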
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the deployment.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
vpcLatticeConfigurations (list) --
The VPC Lattice configuration for the service deployment.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to.
roleArn (string) --
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) --
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
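Since this release adds vpcLatticeConfigurations to CreateService and UpdateService, a minimal sketch of associating a service with a VPC Lattice target group follows; the role ARN, target group ARN, and port name are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="api",
    vpcLatticeConfigurations=[
        {
            "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",
            "targetGroupArn": "arn:aws:vpc-lattice:us-east-1:123456789012:targetgroup/tg-0123456789abcdef0",
            "portName": "api",  # the portMapping name from the task definition
        }
    ],
)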
roleArn (string) --
The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events (list) --
The event stream for your service. A maximum of 100 of the latest events are displayed.
(dict) --
The details for an event that's associated with a service.
id (string) --
The ID string for the event.
createdAt (datetime) --
The Unix timestamp for the time when the event was triggered.
message (string) --
The event message.
createdAt (datetime) --
The Unix timestamp for the time when the service was created.
placementConstraints (list) --
The placement constraints for the tasks in the service.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
placementStrategy (list) --
The placement strategy that determines how tasks for the service are placed.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
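A short sketch combining the constraint and strategy types above: spread tasks across Availability Zones, then binpack on memory, restricted to instances that match a cluster query expression. The names and the expression are illustrative.
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="packed",
    taskDefinition="app:7",
    desiredCount=6,
    launchType="EC2",
    placementConstraints=[
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ t3.*"}
    ],
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
)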
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
healthCheckGracePeriodSeconds (integer) --
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started.
schedulingStrategy (string) --
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available.
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints.
deploymentController (dict) --
The deployment controller type the service is using.
type (string) --
The deployment controller type to use.
There are three deployment controller types available:
ECS
The rolling update ( ECS) deployment type involves replacing the current running version of the container with the latest version. The number of containers Amazon ECS adds or removes from the service during a rolling update is controlled by adjusting the minimum and maximum number of healthy tasks allowed during a service deployment, as specified in the DeploymentConfiguration.
CODE_DEPLOY
The blue/green ( CODE_DEPLOY) deployment type uses the blue/green deployment model powered by CodeDeploy, which allows you to verify a new deployment of a service before sending production traffic to it.
EXTERNAL
The external ( EXTERNAL) deployment type enables you to use any third-party deployment controller for full control over the deployment process for an Amazon ECS service.
tags (list) --
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
createdBy (string) --
The principal that created the service.
enableECSManagedTags (boolean) --
Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
propagateTags (string) --
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
enableExecuteCommand (boolean) --
Determines whether the execute command functionality is turned on for the service. If true, the execute command functionality is turned on for all containers in tasks as part of the service.
{'serviceRevisions': {'vpcLatticeConfigurations': [{'portName': 'string', 'roleArn': 'string', 'targetGroupArn': 'string'}]}}
Describes one or more service revisions.
A service revision is a version of the service that includes the values for the Amazon ECS resources (for example, task definition) and the environment resources (for example, load balancers, subnets, and security groups). For more information, see Amazon ECS service revisions.
You can't describe a service revision that was created before October 25, 2024.
See also: AWS API Documentation
Request Syntax
client.describe_service_revisions( serviceRevisionArns=[ 'string', ] )
list
[REQUIRED]
The ARN of the service revision.
You can specify a maximum of 20 ARNs.
You can call ListServiceDeployments to get the ARNs.
(string) --
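A hedged end-to-end sketch: list recent deployments for a service, collect their service revision ARNs, and describe them. The cluster and service names are placeholders, and the field that carries the revision ARN on each deployment entry is an assumption; check the ListServiceDeployments response shape in your SDK version.
import boto3

ecs = boto3.client("ecs")

deployments = ecs.list_service_deployments(cluster="my-cluster", service="web")
# Assumed field name for the revision ARN on each deployment entry.
revision_arns = [
    d["targetServiceRevisionArn"] for d in deployments.get("serviceDeployments", [])
][:20]  # DescribeServiceRevisions accepts at most 20 ARNs per call

if revision_arns:
    described = ecs.describe_service_revisions(serviceRevisionArns=revision_arns)
    for revision in described["serviceRevisions"]:
        print(revision["serviceRevisionArn"], revision["taskDefinition"])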
dict
Response Syntax
{ 'serviceRevisions': [ { 'serviceRevisionArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'taskDefinition': 'string', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'platformVersion': 'string', 'platformFamily': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123 }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'containerImages': [ { 'containerName': 'string', 'imageDigest': 'string', 'image': 'string' }, ], 'guardDutyEnabled': True|False, 'serviceConnectConfiguration': { 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string' }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'volumeConfigurations': [ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'createdAt': datetime(2015, 1, 1), 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] }
Response Structure
(dict) --
serviceRevisions (list) --
The list of service revisions described.
(dict) --
Information about the service revision.
A service revision contains a record of the workload configuration Amazon ECS is attempting to deploy. Whenever you create or deploy a service, Amazon ECS automatically creates and captures the configuration that you're trying to deploy in the service revision. For information about service revisions, see Amazon ECS service revisions in the Amazon Elastic Container Service Developer Guide.
serviceRevisionArn (string) --
The ARN of the service revision.
serviceArn (string) --
The ARN of the service for the service revision.
clusterArn (string) --
The ARN of the cluster that hosts the service.
taskDefinition (string) --
The task definition the service revision uses.
capacityProviderStrategy (list) --
The capacity provider strategy the service revision uses.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
launchType (string) --
The launch type the service revision uses.
platformVersion (string) --
For the Fargate launch type, the platform version the service revision uses.
platformFamily (string) --
The platform family the service revision uses.
loadBalancers (list) --
The load balancers the service revision uses.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, omit the load balancer name parameter.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
serviceRegistries (list) --
The service registries (for Service Discovery) the service revision uses.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
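To make the containerName/containerPort versus port distinction concrete, here is a hedged sketch of serviceRegistries entries for a Cloud Map SRV record; the registry ARN, container name, and ports are placeholders.

# Hypothetical entry for a task definition that uses bridge or host network mode:
# a containerName and containerPort combination is required.
service_registries_bridge = [
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:111122223333:service/srv-example',  # placeholder
        'containerName': 'web',
        'containerPort': 8080,
    },
]

# Hypothetical entry for a task definition that uses awsvpc network mode with an SRV record:
# either a containerName/containerPort combination or a port value, but not both.
service_registries_awsvpc = [
    {
        'registryArn': 'arn:aws:servicediscovery:us-east-1:111122223333:service/srv-example',  # placeholder
        'port': 8080,
    },
]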
networkConfiguration (dict) --
The network configuration for a task or service.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
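A minimal sketch of the corresponding request-side networkConfiguration; the subnet and security group IDs are placeholders.

# Hypothetical awsvpc network configuration.
network_configuration = {
    'awsvpcConfiguration': {
        'subnets': ['subnet-0123456789abcdef0'],     # up to 16 subnets
        'securityGroups': ['sg-0123456789abcdef0'],  # up to 5 security groups; the VPC default is used if omitted
        'assignPublicIp': 'DISABLED',                # DISABLED is the default
    },
}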
containerImages (list) --
The container images the service revision uses.
(dict) --
The details about the container image a service revision uses.
To ensure that all tasks in a service use the same container image, Amazon ECS resolves container image names and any image tags specified in the task definition to container image digests.
After the container image digest has been established, Amazon ECS uses the digest to start any other desired tasks, and for any future service and service revision updates. This leads to all tasks in a service always running identical container images, resulting in version consistency for your software. For more information, see Container image resolution in the Amazon ECS Developer Guide.
containerName (string) --
The name of the container.
imageDigest (string) --
The container image digest.
image (string) --
The container image.
guardDutyEnabled (boolean) --
Indicates whether Runtime Monitoring is turned on.
serviceConnectConfiguration (dict) --
The Service Connect configuration of your Amazon ECS service. The configuration that this service uses to discover and connect to services, and to be discovered by and connected from other services, within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that joins a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests either from a load balancer that's attached to the service or by other means.
Each object selects a port from the task definition, assigns a name for the Cloud Map service, and specifies a list of aliases (endpoints) and ports that client applications use to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) --
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP/HTTP2/GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) --
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, you could use Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location, or you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Specify the log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using the Fargate launch type. Optional for the EC2 launch type.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.
If you use the blocking mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
max-buffer-size
Required: No
Default value: 1m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might result in the buffer inside Docker running out of memory.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
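For illustration, the awslogs options described above could be combined as follows; the log group, Region, and prefix values are placeholders, not defaults.

# Hypothetical log configuration using the awslogs driver.
log_configuration = {
    'logDriver': 'awslogs',
    'options': {
        'awslogs-group': '/ecs/my-service',      # placeholder log group
        'awslogs-region': 'us-east-1',           # placeholder Region
        'awslogs-stream-prefix': 'my-service',   # recommended; required for the Fargate launch type
        'awslogs-create-group': 'true',          # optional; defaults to false
        'mode': 'non-blocking',                  # keep app writes from blocking if CloudWatch is unreachable
        'max-buffer-size': '25m',                # in-memory buffer used with non-blocking mode
    },
}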
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
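Putting the pieces above together, a hedged sketch of a serviceConnectConfiguration with one named port, a client alias, and a proxy log configuration that pulls a secret via secretOptions might look like this; the namespace, names, URL, and ARNs are placeholders.

# Hypothetical Service Connect configuration for create_service or update_service.
service_connect_configuration = {
    'enabled': True,
    'namespace': 'my-namespace',                   # Cloud Map namespace name or ARN
    'services': [
        {
            'portName': 'api',                     # must match a portMapping name in the task definition
            'discoveryName': 'api',
            'clientAliases': [
                {'port': 8080, 'dnsName': 'api'},  # clients in the namespace connect to api:8080
            ],
            'timeout': {
                'idleTimeoutSeconds': 300,
                'perRequestTimeoutSeconds': 30,
            },
        },
    ],
    'logConfiguration': {
        'logDriver': 'splunk',
        'options': {'splunk-url': 'https://splunk.example.com:8088'},  # placeholder endpoint
        'secretOptions': [
            {
                'name': 'splunk-token',
                # Placeholder Secrets Manager ARN that holds the token.
                'valueFrom': 'arn:aws:secretsmanager:us-east-1:111122223333:secret:splunk-token-abc123',
            },
        ],
    },
}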
volumeConfigurations (list) --
The volumes that are configured at deployment that the service revision uses.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) --
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If no value is specified, encryption is turned on by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When encryption is turned on and no Amazon Web Services Key Management Service key is specified, the default Amazon Web Services managed key for Amazon EBS volumes is used. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create the volume. You must specify either a snapshot ID or a volume size. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) --
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) --
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
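As a hedged sketch, a volumeConfigurations entry that asks Amazon ECS to create a gp3 volume per task could look like the following; the volume name must match the task definition, and the role ARN and tag values are placeholders.

# Hypothetical service volume configuration for a managed Amazon EBS volume.
volume_configurations = [
    {
        'name': 'ebs-data',                          # must match a volume name in the task definition
        'managedEBSVolume': {
            'encrypted': True,
            'volumeType': 'gp3',
            'sizeInGiB': 100,                        # gp3 supports 1-16,384 GiB
            'iops': 3000,                            # gp3 default
            'throughput': 125,                       # MiB/s
            'filesystemType': 'xfs',                 # Linux default
            'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole',  # placeholder infrastructure role
            'tagSpecifications': [
                {
                    'resourceType': 'volume',
                    'tags': [{'key': 'team', 'value': 'storage'}],  # placeholder tag
                    'propagateTags': 'NONE',
                },
            ],
        },
    },
]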
fargateEphemeralStorage (dict) --
The amount of ephemeral storage to allocate for the deployment.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
createdAt (datetime) --
The time that the service revision was created. The format is yyyy-mm-dd HH:mm:ss.SSSSS.
vpcLatticeConfigurations (list) --
The VPC Lattice configuration for the service revision.
(dict) --
The VPC Lattice configuration for your service, which holds the information about the target group or groups that the Amazon ECS tasks will be registered to.
roleArn (string) --
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) --
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
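A minimal sketch of the vpcLatticeConfigurations value as it would be passed to create_service or update_service; the role ARN, target group ARN, and port name are placeholders.

# Hypothetical VPC Lattice configuration.
vpc_lattice_configurations = [
    {
        'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole',  # placeholder infrastructure role
        'targetGroupArn': 'arn:aws:vpc-lattice:us-east-1:111122223333:targetgroup/tg-0123456789abcdef0',  # placeholder
        'portName': 'web',   # must match a portMapping name in the task definition
    },
]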
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide.
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
{'services': {'deployments': {'vpcLatticeConfigurations': [{'portName': 'string', 'roleArn': 'string', 'targetGroupArn': 'string'}]}}}
Describes the specified services running in your cluster.
See also: AWS API Documentation
Request Syntax
client.describe_services( cluster='string', services=[ 'string', ], include=[ 'TAGS', ] )
string
The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service to describe. If you do not specify a cluster, the default cluster is assumed. This parameter is required if the service or services you are describing were launched in any cluster other than the default cluster.
list
[REQUIRED]
A list of services to describe. You may specify up to 10 services to describe in a single operation.
(string) --
list
Determines whether you want to see the resource tags for the service. If TAGS is specified, the tags are included in the response. If this field is omitted, tags aren't included in the response.
(string) --
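A hedged usage sketch of describe_services that prints basic status fields and any failures; the cluster and service names are placeholders.

import boto3

ecs = boto3.client('ecs')

response = ecs.describe_services(
    cluster='my-cluster',          # placeholder cluster name
    services=['my-service'],       # up to 10 services per call
    include=['TAGS'],              # omit to leave tags out of the response
)

for service in response['services']:
    print(service['serviceName'], service['status'],
          service['runningCount'], '/', service['desiredCount'])

for failure in response['failures']:
    print('failure:', failure['arn'], failure['reason'])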
dict
Response Syntax
{ 'services': [ { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123 }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False } }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123 }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string' }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 
'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False }, ], 'failures': [ { 'arn': 'string', 'reason': 'string', 'detail': 'string' }, ] }Response Structure
(dict) --
services (list) --
The list of services described.
(dict) --
Details on a service within a cluster.
serviceArn (string) --
The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide.
serviceName (string) --
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers (list) --
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
status (string) --
The status of the service. The valid values are ACTIVE, DRAINING, or INACTIVE.
desiredCount (integer) --
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
runningCount (integer) --
The number of tasks in the cluster that are in the RUNNING state.
pendingCount (integer) --
The number of tasks in the cluster that are in the PENDING state.
launchType (string) --
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy (list) --
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is a strategy that contains two capacity providers, each with a weight of 1. When the base is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
platformVersion (string) --
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
taskDefinition (string) --
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
deploymentConfiguration (dict) --
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide
enable (boolean) --
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) --
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) --
Determines whether to use the CloudWatch alarm option in the service deployment process.
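For illustration, a deploymentConfiguration that combines the rolling update limits, the circuit breaker, and alarm-based rollback could be sketched as follows; the alarm name is a placeholder.

# Hypothetical deployment configuration for the ECS (rolling update) deployment type.
deployment_configuration = {
    'maximumPercent': 200,            # allow up to 2x desiredCount during the deployment
    'minimumHealthyPercent': 50,      # desiredCount=4 -> at least ceil(4 * 50 / 100) = 2 healthy tasks
    'deploymentCircuitBreaker': {
        'enable': True,
        'rollback': True,             # roll back to the last successful deployment on failure
    },
    'alarms': {
        'alarmNames': ['my-service-error-alarm'],  # placeholder CloudWatch alarm name
        'enable': True,
        'rollback': True,
    },
}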
taskSets (list) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that is associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is a strategy that contains two capacity providers, each with a weight of 1. When the base is satisfied, the tasks are split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
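As a rough illustration of the awsvpcConfiguration shape described above, the following sketch uses placeholder subnet and security group IDs; swap in your own values.

import boto3

ecs = boto3.client("ecs")

# Placeholder subnet and security group IDs; at most 16 subnets and 5 security groups.
ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",  # the default, shown explicitly
        }
    },
)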
loadBalancers (list) --
Details on a load balancer that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, omit the load balancer name parameter.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
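The fields above map onto the loadBalancers request parameter. A minimal sketch for an Application Load Balancer target group follows; the ARN, container name, and port are illustrative assumptions, and loadBalancerName is omitted because the example uses an ALB.

import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    loadBalancers=[
        {
            # Placeholder target group ARN; loadBalancerName is omitted for ALB/NLB.
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/0123456789abcdef",
            "containerName": "web",   # must match a container name in the task definition
            "containerPort": 80,      # must match a containerPort in the task definition
        }
    ],
)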
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
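For awsvpc tasks that use an SRV record, you specify either a containerName and containerPort pair or a port, never both; the sketch below assumes the pair form with placeholder values.

import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    serviceRegistries=[
        {
            # Placeholder Cloud Map service ARN.
            "registryArn": "arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef",
            "containerName": "web",   # SRV form: containerName/containerPort pair (omit "port" here)
            "containerPort": 8080,
        }
    ],
)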
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
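To show what a tag list that satisfies these restrictions looks like, here is a small sketch that tags a service with the ECS TagResource API; the resource ARN and tag values are placeholder assumptions.

import boto3

ecs = boto3.client("ecs")

# Keys up to 128 characters, values up to 256, no aws:/AWS: prefix, at most 50 tags per resource.
ecs.tag_resource(
    resourceArn="arn:aws:ecs:us-east-1:111122223333:service/my-cluster/my-service",  # placeholder ARN
    tags=[
        {"key": "environment", "value": "staging"},
        {"key": "team", "value": "platform"},
    ],
)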
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
deployments (list) --
The current state of deployments for the service.
(dict) --
The details of an Amazon ECS service deployment. This is used only when a service uses the ECS deployment controller type.
id (string) --
The ID of the deployment.
status (string) --
The status of the deployment. The following describes each state.
PRIMARY
The most recent deployment of a service.
ACTIVE
A service deployment that still has running tasks, but is in the process of being replaced with a new PRIMARY deployment.
INACTIVE
A deployment that has been completely replaced.
taskDefinition (string) --
The most recent task definition that was specified for the tasks in the service to use.
desiredCount (integer) --
The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount (integer) --
The number of tasks in the deployment that are in the PENDING status.
runningCount (integer) --
The number of tasks in the deployment that are in the RUNNING status.
failedTasks (integer) --
The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a RUNNING state, or if it fails any of its defined health checks and is stopped.
createdAt (datetime) --
The Unix timestamp for the time when the service deployment was created.
updatedAt (datetime) --
The Unix timestamp for the time when the service deployment was last updated.
capacityProviderStrategy (list) --
The capacity provider strategy that the deployment is using.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
launchType (string) --
The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
platformVersion (string) --
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
rolloutState (string) --
The rollout state of the deployment. When a service deployment is started, it begins in an IN_PROGRESS state. When the service reaches a steady state, the deployment transitions to a COMPLETED state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a FAILED state. A deployment in FAILED state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker.
rolloutStateReason (string) --
A description of the rollout state of a deployment.
serviceConnectConfiguration (dict) --
The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments.
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) --
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP, HTTP2, and GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) --
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
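Putting the Service Connect fields above together, a minimal sketch of a serviceConnectConfiguration follows; the namespace, port name, discovery name, and alias are placeholder assumptions, and TLS is omitted for brevity.

import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    serviceConnectConfiguration={
        "enabled": True,
        "namespace": "internal",            # placeholder Cloud Map namespace
        "services": [
            {
                "portName": "api",          # must match a portMappings name in the task definition
                "discoveryName": "orders",  # placeholder Cloud Map service name
                "clientAliases": [
                    {"port": 8080, "dnsName": "orders"}  # clients in the namespace connect to orders:8080
                ],
            }
        ],
    },
)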
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task, for example, Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using the Fargate launch type. Optional for the EC2 launch type.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs for them to appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.
If you use the blocking mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
max-buffer-size
Required: No
Default value: 1m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. This can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
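As a sketch of the awslogs options discussed above, applied here to the Service Connect proxy's log configuration, the log group, Region, and prefix are placeholder assumptions; mode and max-buffer-size show the non-blocking settings.

import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    serviceConnectConfiguration={
        "enabled": True,
        "namespace": "internal",  # placeholder namespace
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "/ecs/service-connect-proxy",  # placeholder log group
                "awslogs-region": "us-east-1",                  # placeholder Region
                "awslogs-stream-prefix": "proxy",
                "awslogs-create-group": "true",
                "mode": "non-blocking",   # buffer logs instead of blocking stdout/stderr writes
                "max-buffer-size": "4m",  # in-memory buffer used in non-blocking mode
            },
        },
    },
)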
serviceConnectResources (list) --
The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name.
(dict) --
The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service.
A task can resolve the dnsName for each of the clientAliases of a service. However a task can't resolve the discovery names. If you want to connect to a service, refer to the ServiceConnectConfiguration of that service for the list of clientAliases that you can use.
discoveryName (string) --
The discovery name of this Service Connect resource.
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
discoveryArn (string) --
The Amazon Resource Name (ARN) for the namespace in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS.
volumeConfigurations (list) --
The details of the volume that was configuredAtLaunch. You can configure different settings like the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) --
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If no value is specified, encryption is turned on by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When encryption is turned on and no Amazon Web Services Key Management Service key is specified, the default Amazon Web Services managed key for Amazon EBS volumes is used. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create the volume. You must specify either a snapshot ID or a volume size. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) --
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) --
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
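The EBS settings above can be combined into a volumeConfigurations entry such as the sketch below; the volume name must match the task definition, and the role ARN, size, and performance values are placeholder assumptions.

import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    volumeConfigurations=[
        {
            "name": "ebs-data",  # must match the volume name in the task definition
            "managedEBSVolume": {
                "volumeType": "gp3",
                "sizeInGiB": 50,
                "iops": 3000,          # gp3 baseline
                "throughput": 125,
                "encrypted": True,
                "filesystemType": "xfs",
                # Placeholder ECS infrastructure role used to manage the volume.
                "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
            },
        }
    ],
)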
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the deployment.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
vpcLatticeConfigurations (list) --
The VPC Lattice configuration for the service deployment.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to.
roleArn (string) --
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) --
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
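Since VPC Lattice configurations are the new addition in this release, here is a minimal sketch of passing one to UpdateService; the role ARN, target group ARN, and port name are placeholder assumptions.

import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    vpcLatticeConfigurations=[
        {
            "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",  # placeholder infrastructure role
            "targetGroupArn": "arn:aws:vpc-lattice:us-east-1:111122223333:targetgroup/tg-0123456789abcdef",  # placeholder
            "portName": "web",  # must match a portMapping name in the task definition
        }
    ],
)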
roleArn (string) --
The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events (list) --
The event stream for your service. A maximum of 100 of the latest events are displayed.
(dict) --
The details for an event that's associated with a service.
id (string) --
The ID string for the event.
createdAt (datetime) --
The Unix timestamp for the time when the event was triggered.
message (string) --
The event message.
createdAt (datetime) --
The Unix timestamp for the time when the service was created.
placementConstraints (list) --
The placement constraints for the tasks in the service.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
placementStrategy (list) --
The placement strategy that determines how tasks for the service are placed.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
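A brief sketch combining a memberOf constraint with spread and binpack strategies follows; the attribute expression is an illustrative assumption.

import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    placementConstraints=[
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ t3.*"}  # example expression
    ],
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},  # spread across AZs first
        {"type": "binpack", "field": "memory"},                          # then pack by remaining memory
    ],
)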
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
healthCheckGracePeriodSeconds (integer) --
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started.
schedulingStrategy (string) --
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available.
REPLICA-The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON-The daemon scheduling strategy deploys exactly one task on each active container instance. This task meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints.
deploymentController (dict) --
The deployment controller type the service is using.
type (string) --
The deployment controller type to use.
There are three deployment controller types available:
ECS
The rolling update ( ECS) deployment type involves replacing the current running version of the container with the latest version. The number of containers Amazon ECS adds or removes from the service during a rolling update is controlled by adjusting the minimum and maximum number of healthy tasks allowed during a service deployment, as specified in the DeploymentConfiguration.
CODE_DEPLOY
The blue/green ( CODE_DEPLOY) deployment type uses the blue/green deployment model powered by CodeDeploy, which allows you to verify a new deployment of a service before sending production traffic to it.
EXTERNAL
The external ( EXTERNAL) deployment type enables you to use any third-party deployment controller for full control over the deployment process for an Amazon ECS service.
tags (list) --
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value. You define both the key and value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
createdBy (string) --
The principal that created the service.
enableECSManagedTags (boolean) --
Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
propagateTags (string) --
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
enableExecuteCommand (boolean) --
Determines whether the execute command functionality is turned on for the service. If true, the execute command functionality is turned on for all containers in tasks as part of the service.
failures (list) --
Any failures associated with the call.
(dict) --
A failed resource. For a list of common causes, see API failure reasons in the Amazon Elastic Container Service Developer Guide.
arn (string) --
The Amazon Resource Name (ARN) of the failed resource.
reason (string) --
The reason for the failure.
detail (string) --
The details of the failure.
{'vpcLatticeConfigurations': [{'portName': 'string', 'roleArn': 'string', 'targetGroupArn': 'string'}]}Response
{'service': {'deployments': {'vpcLatticeConfigurations': [{'portName': 'string', 'roleArn': 'string', 'targetGroupArn': 'string'}]}}}
Modifies the parameters of a service.
For services using the rolling update ( ECS) deployment controller, you can update the desired count, deployment configuration, network configuration, load balancers, service registries, enable ECS managed tags option, propagate tags option, task placement constraints and strategies, and task definition. When you update any of these parameters, Amazon ECS starts new tasks with the new configuration.
You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when starting or running a task, or when creating or updating a service. You can update your volume configurations and trigger a new deployment. volumeConfigurations is only supported for REPLICA services and not DAEMON services. If you leave volumeConfigurations null, it doesn't trigger a new deployment. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
For services using the blue/green ( CODE_DEPLOY) deployment controller, only the desired count, deployment configuration, health check grace period, task placement constraints and strategies, enable ECS managed tags option, and propagate tags can be updated using this API. If the network configuration, platform version, task definition, or load balancer need to be updated, create a new CodeDeploy deployment. For more information, see CreateDeployment in the CodeDeploy API Reference.
For services using an external deployment controller, you can update only the desired count, task placement constraints and strategies, health check grace period, enable ECS managed tags option, and propagate tags option using this API. If the launch type, load balancer, network configuration, platform version, or task definition need to be updated, create a new task set. For more information, see CreateTaskSet.
You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount parameter.
You can attach Amazon EBS volumes to Amazon ECS tasks by configuring the volume when starting or running a task, or when creating or updating a service. For more information, see Amazon EBS volumes in the Amazon Elastic Container Service Developer Guide.
If you have updated the container image of your application, you can create a new task definition with that image and deploy it to your service. The service scheduler uses the minimum healthy percent and maximum percent parameters (in the service's deployment configuration) to determine the deployment strategy.
You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent and maximumPercent, to determine the deployment strategy.
If minimumHealthyPercent is below 100%, the scheduler can ignore desiredCount temporarily during a deployment. For example, if desiredCount is four tasks, a minimum of 50% allows the scheduler to stop two existing tasks before starting two new tasks. Tasks for services that don't use a load balancer are considered healthy if they're in the RUNNING state. Tasks for services that use a load balancer are considered healthy if they're in the RUNNING state and are reported as healthy by the load balancer.
The maximumPercent parameter represents an upper limit on the number of running tasks during a deployment. You can use it to define the deployment batch size. For example, if desiredCount is four tasks, a maximum of 200% starts four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available).
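To make the minimumHealthyPercent and maximumPercent behavior concrete, the following sketch sets the values from the example above (50% and 200%) and also turns on the deployment circuit breaker; the cluster and service names are placeholders.

import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    deploymentConfiguration={
        "minimumHealthyPercent": 50,   # with desiredCount=4, up to 2 tasks may stop before new ones start
        "maximumPercent": 200,         # with desiredCount=4, up to 8 tasks may run during the deployment
        "deploymentCircuitBreaker": {"enable": True, "rollback": True},
    },
)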
When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout. After this, SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent.
When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic.
Determine which of the container instances in your cluster can support your service's task definition. For example, they have the required CPU, memory, ports, and container instance attributes.
By default, the service scheduler attempts to balance tasks across Availability Zones in this manner even though you can choose a different placement strategy.
Sort the valid container instances by the fewest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have zero, valid container instances in either zone B or C are considered optimal for placement.
Place the new service task on a valid container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the fewest number of running tasks for this service.
When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic:
Sort the container instances by the largest number of running tasks for this service in the same Availability Zone as the instance. For example, if zone A has one running service task and zones B and C each have two, container instances in either zone B or C are considered optimal for termination.
Stop the task on a container instance in an optimal Availability Zone (based on the previous steps), favoring container instances with the largest number of running tasks for this service.
See also: AWS API Documentation
Request Syntax
client.update_service( cluster='string', service='string', desiredCount=123, taskDefinition='string', capacityProviderStrategy=[ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], deploymentConfiguration={ 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False } }, networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, placementConstraints=[ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], placementStrategy=[ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], platformVersion='string', forceNewDeployment=True|False, healthCheckGracePeriodSeconds=123, enableExecuteCommand=True|False, enableECSManagedTags=True|False, loadBalancers=[ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123 }, ], propagateTags='TASK_DEFINITION'|'SERVICE'|'NONE', serviceRegistries=[ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], serviceConnectConfiguration={ 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string' }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, volumeConfigurations=[ { 'name': 'string', 'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], vpcLatticeConfigurations=[ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] )
string
The short name or full Amazon Resource Name (ARN) of the cluster that your service runs on. If you do not specify a cluster, the default cluster is assumed.
string
[REQUIRED]
The name of the service to update.
integer
The number of instantiations of the task to place and keep running in your service.
string
The family and revision ( family:revision) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used. If you modify the task definition with UpdateService, Amazon ECS spawns a task with the new version of the task definition and then stops an old task after the new version is running.
list
The capacity provider strategy to update the service to use.
If the service uses the default capacity provider strategy for the cluster, the service can be updated to use one or more capacity providers as opposed to the default capacity provider strategy. However, when a service is using a capacity provider strategy that's not the default capacity provider strategy, the service can't be updated to use the cluster's default capacity provider strategy.
A capacity provider strategy consists of one or more capacity providers along with the base and weight to assign to them. A capacity provider must be associated with the cluster to be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster. Only capacity providers with an ACTIVE or UPDATING status can be used.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) -- [REQUIRED]
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
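For illustration, a minimal sketch follows; the cluster and service names are placeholders, and it assumes the FARGATE and FARGATE_SPOT capacity providers are already associated with the cluster. It pins the first two tasks to FARGATE using base and splits the remaining tasks 1:3 toward FARGATE_SPOT using weight.

import boto3

client = boto3.client('ecs')

# Hypothetical cluster and service names; FARGATE and FARGATE_SPOT must already be
# associated with the cluster for this strategy to be valid.
client.update_service(
    cluster='my-cluster',
    service='my-service',
    capacityProviderStrategy=[
        {'capacityProvider': 'FARGATE', 'base': 2, 'weight': 1},
        {'capacityProvider': 'FARGATE_SPOT', 'base': 0, 'weight': 3},
    ],
    forceNewDeployment=True,
)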
dict
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide.
enable (boolean) -- [REQUIRED]
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) -- [REQUIRED]
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update ( ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service scheduler is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green ( CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) -- [REQUIRED]
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) -- [REQUIRED]
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) -- [REQUIRED]
Determines whether to use the CloudWatch alarm option in the service deployment process.
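As a hedged sketch of a complete deploymentConfiguration, the following call enables the circuit breaker and a CloudWatch alarm check, both with rollback; the cluster, service, and alarm names are placeholders.

import boto3

client = boto3.client('ecs')

# Placeholder names; the alarm must already exist in CloudWatch.
client.update_service(
    cluster='my-cluster',
    service='my-service',
    deploymentConfiguration={
        'maximumPercent': 200,
        'minimumHealthyPercent': 100,
        'deploymentCircuitBreaker': {'enable': True, 'rollback': True},
        'alarms': {
            'alarmNames': ['my-service-errors'],
            'enable': True,
            'rollback': True,
        },
    },
)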
dict
An object representing the network configuration for the service.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) -- [REQUIRED]
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
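A minimal sketch of updating the network configuration follows; the subnet and security group IDs are placeholders, and an awsvpc task definition is assumed.

import boto3

client = boto3.client('ecs')

# Placeholder subnet and security group IDs.
client.update_service(
    cluster='my-cluster',
    service='my-service',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-0abc1234', 'subnet-0def5678'],
            'securityGroups': ['sg-0123456789abcdef0'],
            'assignPublicIp': 'DISABLED',
        }
    },
)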
list
An array of task placement constraint objects to update the service to use. If no value is specified, the existing placement constraints for the service will remain unchanged. If this value is specified, it will override any existing placement constraints defined for the service. To remove all existing placement constraints, specify an empty array.
You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
list
The task placement strategy objects to update the service to use. If no value is specified, the existing placement strategy for the service will remain unchanged. If this value is specified, it will override the existing placement strategy defined for the service. To remove an existing placement strategy, specify an empty object.
You can specify a maximum of five strategy rules for each service.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
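A hedged sketch combining a placement constraint and a placement strategy follows (EC2 launch type assumed; the instance-type expression is only an example). It spreads tasks across Availability Zones first, then binpacks on memory.

import boto3

client = boto3.client('ecs')

# Example constraint and strategy; both replace any existing values on the service.
client.update_service(
    cluster='my-cluster',
    service='my-service',
    placementConstraints=[
        {'type': 'memberOf', 'expression': 'attribute:ecs.instance-type =~ t3.*'},
    ],
    placementStrategy=[
        {'type': 'spread', 'field': 'attribute:ecs.availability-zone'},
        {'type': 'binpack', 'field': 'memory'},
    ],
)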
string
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If a platform version is not specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
boolean
Determines whether to force a new deployment of the service. By default, deployments aren't forced. You can use this option to start a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination ( my_image:latest) or to roll Fargate tasks onto a newer platform version.
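For example, a minimal sketch that only forces a new deployment, with no service definition changes, might look like this; the cluster and service names are placeholders.

import boto3

client = boto3.client('ecs')

# Starts a new deployment so tasks are replaced, for example to pick up a re-pushed image tag.
client.update_service(
    cluster='my-cluster',
    service='my-service',
    forceNewDeployment=True,
)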
integer
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing, VPC Lattice, and container health checks after a task has first started. If you don't specify a health check grace period value, the default value of 0 is used. If you don't use any of the health checks, then healthCheckGracePeriodSeconds is unused.
If your service's tasks take a while to start and respond to health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
boolean
If true, this enables execute command functionality on all task containers.
If you do not want to override the value that was set when the service was created, you can set this to null when performing this action.
boolean
Determines whether to turn on Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
Only tasks launched after the update will reflect the update. To update the tags on all tasks, set forceNewDeployment to true, so that Amazon ECS starts new tasks with the updated tags.
list
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
When you add, update, or remove a load balancer configuration, Amazon ECS starts new tasks with the updated Elastic Load Balancing configuration, and then stops the old tasks when the new tasks are running.
For services that use rolling updates, you can add, update, or remove Elastic Load Balancing target groups. You can update from a single target group to multiple target groups and from multiple target groups to a single target group.
For services that use blue/green deployments, you can update Elastic Load Balancing target groups by using CreateDeployment through CodeDeploy. Note that multiple target groups are not supported for blue/green deployments. For more information see Register multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services that use the external deployment controller, you can add, update, or remove load balancers by using CreateTaskSet. Note that multiple target groups are not supported for external deployments. For more information see Register multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
You can remove existing loadBalancers by passing an empty list.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
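A hedged sketch of swapping the target group for a rolling-update service follows; the target group ARN, container name, and port are placeholders and must match your task definition.

import boto3

client = boto3.client('ecs')

# Placeholder ARN; containerName and containerPort must match the task definition.
client.update_service(
    cluster='my-cluster',
    service='my-service',
    loadBalancers=[
        {
            'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/0123456789abcdef',
            'containerName': 'web',
            'containerPort': 8080,
        },
    ],
)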
string
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
Only tasks launched after the update will reflect the update. To update the tags on all tasks, set forceNewDeployment to true, so that Amazon ECS starts new tasks with the updated tags.
list
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
When you add, update, or remove the service registries configuration, Amazon ECS starts new tasks with the updated service registries configuration, and then stops the old tasks when the new tasks are running.
You can remove existing serviceRegistries by passing an empty list.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
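A minimal sketch of pointing the service at a Cloud Map registry follows; the registry ARN is a placeholder, and the container fields are omitted because they only apply when the bridge or host network mode or SRV records are used.

import boto3

client = boto3.client('ecs')

# Placeholder Cloud Map service registry ARN.
client.update_service(
    cluster='my-cluster',
    service='my-service',
    serviceRegistries=[
        {'registryArn': 'arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef'},
    ],
)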
dict
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) -- [REQUIRED]
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) -- [REQUIRED]
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) -- [REQUIRED]
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP/ HTTP2/ GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) -- [REQUIRED]
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
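Before the logConfiguration fields, a hedged sketch of the services portion of serviceConnectConfiguration; the namespace, port name, and alias are placeholders, and the TLS block is omitted.

import boto3

client = boto3.client('ecs')

# 'api' must match a portMapping name in the task definition; 'internal' is a placeholder namespace.
client.update_service(
    cluster='my-cluster',
    service='my-service',
    serviceConnectConfiguration={
        'enabled': True,
        'namespace': 'internal',
        'services': [
            {
                'portName': 'api',
                'clientAliases': [{'port': 80, 'dnsName': 'api'}],
            },
        ],
    },
)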
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software that's needed must be installed outside of the task. Examples are Fluentd output aggregators or a remote host running Logstash that Gelf logs are sent to.
logDriver (string) -- [REQUIRED]
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using the Fargate launch type. Optional for the EC2 launch type.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.
If you use the blocking mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
max-buffer-size
Required: No
Default value: 1m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve a potential log loss issue, because high throughput might cause Docker's internal buffer to run out of memory.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
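As a hedged sketch only, a dictionary of commonly used awslogs options might look like the following; the log group, Region, and prefix are placeholder values.

# Placeholder values; awslogs-datetime-format and awslogs-multiline-pattern are mutually
# exclusive, so only the datetime format is shown here.
awslogs_options = {
    'awslogs-create-group': 'true',
    'awslogs-group': '/ecs/my-service',
    'awslogs-region': 'us-east-1',
    'awslogs-stream-prefix': 'my-service',
    'awslogs-datetime-format': '%Y-%m-%d %H:%M:%S',
    'mode': 'non-blocking',
    'max-buffer-size': '25m',
}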
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) -- [REQUIRED]
The name of the secret.
valueFrom (string) -- [REQUIRED]
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
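A hedged sketch of a log configuration that uses secretOptions follows; the splunk driver is used because it requires a token, and the Splunk URL and SSM parameter ARN are placeholders.

import boto3

client = boto3.client('ecs')

# Placeholder Splunk endpoint and SSM parameter ARN; the splunk driver requires splunk-url and splunk-token.
client.update_service(
    cluster='my-cluster',
    service='my-service',
    serviceConnectConfiguration={
        'enabled': True,
        'namespace': 'internal',
        'services': [
            {'portName': 'api', 'clientAliases': [{'port': 80, 'dnsName': 'api'}]},
        ],
        'logConfiguration': {
            'logDriver': 'splunk',
            'options': {'splunk-url': 'https://splunk.example.com:8088'},
            'secretOptions': [
                {
                    'name': 'splunk-token',
                    'valueFrom': 'arn:aws:ssm:us-east-1:111122223333:parameter/splunk-token',
                },
            ],
        },
    },
)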
list
The details of the volume that was configuredAtLaunch. You can configure the size, volumeType, IOPS, throughput, snapshot and encryption in ServiceManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition. If set to null, no new deployment is triggered. Otherwise, if this configuration differs from the existing one, it triggers a new deployment.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) -- [REQUIRED]
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If no value is specified, encryption is turned on by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When encryption is turned on and no Amazon Web Services Key Management Service key is specified, the default Amazon Web Services managed key for Amazon EBS volumes is used. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create the volume. You must specify either a snapshot ID or a volume size. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) -- [REQUIRED]
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper- or lowercase combination of these as a prefix for either keys or values, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) -- [REQUIRED]
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
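A hedged sketch of attaching a gp3 Amazon EBS volume to each task follows; the role ARN is a placeholder, and 'data' must match a configuredAtLaunch volume name in the task definition.

import boto3

client = boto3.client('ecs')

# Placeholder role ARN; 'data' must match a configuredAtLaunch volume in the task definition.
client.update_service(
    cluster='my-cluster',
    service='my-service',
    volumeConfigurations=[
        {
            'name': 'data',
            'managedEBSVolume': {
                'volumeType': 'gp3',
                'sizeInGiB': 100,
                'iops': 3000,
                'throughput': 125,
                'filesystemType': 'xfs',
                'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole',
            },
        },
    ],
)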
list
An object representing the VPC Lattice configuration for the service being updated.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group(s) Amazon ECS tasks will be registered to.
roleArn (string) -- [REQUIRED]
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) -- [REQUIRED]
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) -- [REQUIRED]
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
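To tie the parameters together, a hedged sketch of adding a VPC Lattice configuration follows; the role ARN, target group ARN, and port name are placeholders.

import boto3

client = boto3.client('ecs')

# Placeholder ARNs; 'web' must match a portMapping name in the task definition.
client.update_service(
    cluster='my-cluster',
    service='my-service',
    vpcLatticeConfigurations=[
        {
            'roleArn': 'arn:aws:iam::111122223333:role/ecsInfrastructureRole',
            'targetGroupArn': 'arn:aws:vpc-lattice:us-east-1:111122223333:targetgroup/tg-0123456789abcdef',
            'portName': 'web',
        },
    ],
)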
dict
Response Syntax
{ 'service': { 'serviceArn': 'string', 'serviceName': 'string', 'clusterArn': 'string', 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123 }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'status': 'string', 'desiredCount': 123, 'runningCount': 123, 'pendingCount': 123, 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'taskDefinition': 'string', 'deploymentConfiguration': { 'deploymentCircuitBreaker': { 'enable': True|False, 'rollback': True|False }, 'maximumPercent': 123, 'minimumHealthyPercent': 123, 'alarms': { 'alarmNames': [ 'string', ], 'rollback': True|False, 'enable': True|False } }, 'taskSets': [ { 'id': 'string', 'taskSetArn': 'string', 'serviceArn': 'string', 'clusterArn': 'string', 'startedBy': 'string', 'externalId': 'string', 'status': 'string', 'taskDefinition': 'string', 'computedDesiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'loadBalancers': [ { 'targetGroupArn': 'string', 'loadBalancerName': 'string', 'containerName': 'string', 'containerPort': 123 }, ], 'serviceRegistries': [ { 'registryArn': 'string', 'port': 123, 'containerName': 'string', 'containerPort': 123 }, ], 'scale': { 'value': 123.0, 'unit': 'PERCENT' }, 'stabilityStatus': 'STEADY_STATE'|'STABILIZING', 'stabilityStatusAt': datetime(2015, 1, 1), 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' } }, ], 'deployments': [ { 'id': 'string', 'status': 'string', 'taskDefinition': 'string', 'desiredCount': 123, 'pendingCount': 123, 'runningCount': 123, 'failedTasks': 123, 'createdAt': datetime(2015, 1, 1), 'updatedAt': datetime(2015, 1, 1), 'capacityProviderStrategy': [ { 'capacityProvider': 'string', 'weight': 123, 'base': 123 }, ], 'launchType': 'EC2'|'FARGATE'|'EXTERNAL', 'platformVersion': 'string', 'platformFamily': 'string', 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'rolloutState': 'COMPLETED'|'FAILED'|'IN_PROGRESS', 'rolloutStateReason': 'string', 'serviceConnectConfiguration': { 'enabled': True|False, 'namespace': 'string', 'services': [ { 'portName': 'string', 'discoveryName': 'string', 'clientAliases': [ { 'port': 123, 'dnsName': 'string' }, ], 'ingressPortOverride': 123, 'timeout': { 'idleTimeoutSeconds': 123, 'perRequestTimeoutSeconds': 123 }, 'tls': { 'issuerCertificateAuthority': { 'awsPcaAuthorityArn': 'string' }, 'kmsKey': 'string', 'roleArn': 'string' } }, ], 'logConfiguration': { 'logDriver': 'json-file'|'syslog'|'journald'|'gelf'|'fluentd'|'awslogs'|'splunk'|'awsfirelens', 'options': { 'string': 'string' }, 'secretOptions': [ { 'name': 'string', 'valueFrom': 'string' }, ] } }, 'serviceConnectResources': [ { 'discoveryName': 'string', 'discoveryArn': 'string' }, ], 'volumeConfigurations': [ { 'name': 'string', 
'managedEBSVolume': { 'encrypted': True|False, 'kmsKeyId': 'string', 'volumeType': 'string', 'sizeInGiB': 123, 'snapshotId': 'string', 'iops': 123, 'throughput': 123, 'tagSpecifications': [ { 'resourceType': 'volume', 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE' }, ], 'roleArn': 'string', 'filesystemType': 'ext3'|'ext4'|'xfs'|'ntfs' } }, ], 'fargateEphemeralStorage': { 'kmsKeyId': 'string' }, 'vpcLatticeConfigurations': [ { 'roleArn': 'string', 'targetGroupArn': 'string', 'portName': 'string' }, ] }, ], 'roleArn': 'string', 'events': [ { 'id': 'string', 'createdAt': datetime(2015, 1, 1), 'message': 'string' }, ], 'createdAt': datetime(2015, 1, 1), 'placementConstraints': [ { 'type': 'distinctInstance'|'memberOf', 'expression': 'string' }, ], 'placementStrategy': [ { 'type': 'random'|'spread'|'binpack', 'field': 'string' }, ], 'networkConfiguration': { 'awsvpcConfiguration': { 'subnets': [ 'string', ], 'securityGroups': [ 'string', ], 'assignPublicIp': 'ENABLED'|'DISABLED' } }, 'healthCheckGracePeriodSeconds': 123, 'schedulingStrategy': 'REPLICA'|'DAEMON', 'deploymentController': { 'type': 'ECS'|'CODE_DEPLOY'|'EXTERNAL' }, 'tags': [ { 'key': 'string', 'value': 'string' }, ], 'createdBy': 'string', 'enableECSManagedTags': True|False, 'propagateTags': 'TASK_DEFINITION'|'SERVICE'|'NONE', 'enableExecuteCommand': True|False } }Response Structure
(dict) --
service (dict) --
The full description of your service following the update call.
serviceArn (string) --
The ARN that identifies the service. For more information about the ARN format, see Amazon Resource Name (ARN) in the Amazon ECS Developer Guide.
serviceName (string) --
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. Service names must be unique within a cluster. However, you can have similarly named services in multiple clusters within a Region or across multiple Regions.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that hosts the service.
loadBalancers (list) --
A list of Elastic Load Balancing load balancer objects. It contains the load balancer name, the container name, and the container port to access from the load balancer. The container name is as it appears in a container definition.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this service. For more information, see Service Discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
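For orientation, a serviceRegistries entry for a task that uses the awsvpc network mode with an SRV record might look like the sketch below; the Cloud Map service ARN and port are hypothetical placeholders.

# Passed as the serviceRegistries parameter of create_service.
service_registries = [
    {
        "registryArn": "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example",  # hypothetical Cloud Map service
        # With awsvpc and an SRV record, specify either port or a
        # containerName/containerPort pair from the task definition, not both.
        "port": 8080,
    }
]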
status (string) --
The status of the service. The valid values are ACTIVE, DRAINING, or INACTIVE.
desiredCount (integer) --
The desired number of instantiations of the task definition to keep running on the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
runningCount (integer) --
The number of tasks in the cluster that are in the RUNNING state.
pendingCount (integer) --
The number of tasks in the cluster that are in the PENDING state.
launchType (string) --
The launch type the service is using. When using the DescribeServices API, this field is omitted if the service was created using a capacity provider strategy.
capacityProviderStrategy (list) --
The capacity provider strategy the service uses. When using the DescribeServices API, this field is omitted if the service was created using a launch type.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
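As a rough illustration of how base and weight interact, the sketch below runs the first two tasks on FARGATE to satisfy the base and then splits the remaining tasks 1:4 between FARGATE and FARGATE_SPOT according to the weights. The values are assumptions chosen only for illustration.

# Passed as the capacityProviderStrategy parameter of create_service or run_task.
capacity_provider_strategy = [
    {"capacityProvider": "FARGATE", "base": 2, "weight": 1},  # base of 2 is satisfied first
    {"capacityProvider": "FARGATE_SPOT", "weight": 4},        # then 4 tasks here for every 1 on FARGATE
]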
platformVersion (string) --
The platform version to run your service on. A platform version is only specified for tasks that are hosted on Fargate. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
taskDefinition (string) --
The task definition to use for tasks in the service. This value is specified when the service is created with CreateService, and it can be modified with UpdateService.
deploymentConfiguration (dict) --
Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
deploymentCircuitBreaker (dict) --
The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide
enable (boolean) --
Determines whether to use the deployment circuit breaker logic for the service.
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
maximumPercent (integer) --
If a service is using the rolling update (ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%.
If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
minimumHealthyPercent (integer) --
If a service is using the rolling update (ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.
For services that do not use a load balancer, the following should be noted:
A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service scheduler is 0% for the CLI, the Amazon Web Services SDKs, and the APIs and 50% for the Amazon Web Services Management Console.
The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value.
If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state.
If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
alarms (dict) --
Information about the CloudWatch alarms.
alarmNames (list) --
One or more CloudWatch alarm names. Use a "," to separate the alarms.
(string) --
rollback (boolean) --
Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
enable (boolean) --
Determines whether to use the CloudWatch alarm option in the service deployment process.
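Putting these deployment parameters together, a deploymentConfiguration might resemble the following sketch; the CloudWatch alarm name is a hypothetical placeholder.

# Passed as the deploymentConfiguration parameter of create_service or update_service.
deployment_configuration = {
    "deploymentCircuitBreaker": {"enable": True, "rollback": True},
    "maximumPercent": 200,         # allow up to twice desiredCount during a rollout
    "minimumHealthyPercent": 100,  # keep at least desiredCount healthy tasks running
    "alarms": {
        "alarmNames": ["example-service-errors"],  # hypothetical alarm name
        "enable": True,
        "rollback": True,
    },
}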
taskSets (list) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
(dict) --
Information about a set of Amazon ECS tasks in either a CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic.
id (string) --
The ID of the task set.
taskSetArn (string) --
The Amazon Resource Name (ARN) of the task set.
serviceArn (string) --
The Amazon Resource Name (ARN) of the service the task set exists in.
clusterArn (string) --
The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in.
startedBy (string) --
The tag specified when a task set is started. If a CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used.
externalId (string) --
The external ID associated with the task set.
If a CodeDeploy deployment created a task set, the externalId parameter contains the CodeDeploy deployment ID.
If a task set is created for an external deployment and is associated with a service discovery registry, the externalId parameter contains the ECS_TASK_SET_EXTERNAL_ID Cloud Map attribute.
status (string) --
The status of the task set. The following describes each state.
PRIMARY
The task set is serving production traffic.
ACTIVE
The task set isn't serving production traffic.
DRAINING
The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group.
taskDefinition (string) --
The task definition that the task set is using.
computedDesiredCount (integer) --
The computed desired count for the task set. This is calculated by multiplying the service's desiredCount by the task set's scale percentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks.
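That rounding rule can be written out directly; for example, a desiredCount of 4 and a scale of 30% gives a computed desired count of 1.2, which rounds up to 2.

import math

desired_count = 4        # the service's desiredCount
scale_percent = 30.0     # the task set's scale value
computed_desired_count = math.ceil(desired_count * scale_percent / 100)  # 1.2 rounds up to 2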
pendingCount (integer) --
The number of tasks in the task set that are in the PENDING status during a deployment. A task in the PENDING state is preparing to enter the RUNNING state. A task set enters the PENDING status when it launches for the first time or when it's restarted after being in the STOPPED state.
runningCount (integer) --
The number of tasks in the task set that are in the RUNNING status during a deployment. A task in the RUNNING state is running and ready for use.
createdAt (datetime) --
The Unix timestamp for the time when the task set was created.
updatedAt (datetime) --
The Unix timestamp for the time when the task set was last updated.
launchType (string) --
The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide.
capacityProviderStrategy (list) --
The capacity provider strategy that's associated with the task set.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
platformVersion (string) --
The Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Fargate. For more information, see Fargate platform versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type.
All tasks in the set must have the same value.
networkConfiguration (dict) --
The network configuration for the task set.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
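A typical awsvpcConfiguration, as it might be supplied in the networkConfiguration parameter, is sketched below with placeholder subnet and security group IDs.

# Passed as the networkConfiguration parameter for tasks that use the awsvpc network mode.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0example1", "subnet-0example2"],  # up to 16 subnets
        "securityGroups": ["sg-0example"],                    # up to 5 security groups; VPC default if omitted
        "assignPublicIp": "DISABLED",
    }
}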
loadBalancers (list) --
Details on the load balancers that are used with a task set.
(dict) --
The load balancer configuration to use with a service or task set.
When you add, update, or remove a load balancer configuration, Amazon ECS starts a new deployment with the updated Elastic Load Balancing configuration. This causes tasks to register to and deregister from load balancers.
We recommend that you verify this on a test environment before you update the Elastic Load Balancing configuration.
A service-linked role is required for services that use multiple target groups. For more information, see Using service-linked roles in the Amazon Elastic Container Service Developer Guide.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide.
For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide.
loadBalancerName (string) --
The name of the load balancer to associate with the Amazon ECS service or task set.
If you are using an Application Load Balancer or a Network Load Balancer, the load balancer name parameter should be omitted.
containerName (string) --
The name of the container (as it appears in a container definition) to associate with the load balancer.
You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
containerPort (integer) --
The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
serviceRegistries (list) --
The details for the service discovery registries to assign to this task set. For more information, see Service discovery.
(dict) --
The details for the service registry.
Each service may be associated with one service registry. Multiple service registries for each service are not supported.
When you add, update, or remove the service registries configuration, Amazon ECS starts a new deployment. New tasks are registered and deregistered to the updated service registry configuration.
registryArn (string) --
The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Cloud Map. For more information, see CreateService.
port (integer) --
The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
containerName (string) --
The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
containerPort (integer) --
The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
scale (dict) --
A floating-point percentage of your desired number of tasks to place and keep running in the task set.
value (float) --
The value, specified as a percent total of a service's desiredCount, to scale the task set. Accepted values are numbers between 0 and 100.
unit (string) --
The unit of measure for the scale value.
stabilityStatus (string) --
The stability status. This indicates whether the task set has reached a steady state. If the following conditions are met, the task set is in STEADY_STATE:
The task runningCount is equal to the computedDesiredCount.
The pendingCount is 0.
There are no tasks that are running on container instances in the DRAINING status.
All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks.
If any of those conditions aren't met, the stability status returns STABILIZING.
stabilityStatusAt (datetime) --
The Unix timestamp for the time when the task set stability status was retrieved.
tags (list) --
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that make up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that make up a tag. A value acts as a descriptor within a tag category (key).
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the task set.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for deployment.
deployments (list) --
The current state of deployments for the service.
(dict) --
The details of an Amazon ECS service deployment. This is used only when a service uses the ECS deployment controller type.
id (string) --
The ID of the deployment.
status (string) --
The status of the deployment. The following describes each state.
PRIMARY
The most recent deployment of a service.
ACTIVE
A service deployment that still has running tasks, but is in the process of being replaced with a new PRIMARY deployment.
INACTIVE
A deployment that has been completely replaced.
taskDefinition (string) --
The most recent task definition that was specified for the tasks in the service to use.
desiredCount (integer) --
The most recent desired count of tasks that was specified for the service to deploy or maintain.
pendingCount (integer) --
The number of tasks in the deployment that are in the PENDING status.
runningCount (integer) --
The number of tasks in the deployment that are in the RUNNING status.
failedTasks (integer) --
The number of consecutively failed tasks in the deployment. A task is considered a failure if the service scheduler can't launch the task, the task doesn't transition to a RUNNING state, or if it fails any of its defined health checks and is stopped.
createdAt (datetime) --
The Unix timestamp for the time when the service deployment was created.
updatedAt (datetime) --
The Unix timestamp for the time when the service deployment was last updated.
capacityProviderStrategy (list) --
The capacity provider strategy that the deployment is using.
(dict) --
The details of a capacity provider strategy. A capacity provider strategy can be set when using the RunTask or CreateCluster APIs or as the default capacity provider strategy for a cluster with the CreateCluster API.
Only capacity providers that are already associated with a cluster and have an ACTIVE or UPDATING status can be used in a capacity provider strategy. The PutClusterCapacityProviders API is used to associate a capacity provider with a cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New Auto Scaling group capacity providers can be created with the CreateClusterCapacityProvider API operation.
To use a Fargate capacity provider, specify either the FARGATE or FARGATE_SPOT capacity providers. The Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used in a capacity provider strategy.
With FARGATE_SPOT, you can run interruption tolerant tasks at a rate that's discounted compared to the FARGATE price. FARGATE_SPOT runs tasks on spare compute capacity. When Amazon Web Services needs the capacity back, your tasks are interrupted with a two-minute warning. FARGATE_SPOT supports Linux tasks with the X86_64 architecture on platform version 1.3.0 or later. FARGATE_SPOT supports Linux tasks with the ARM64 architecture on platform version 1.4.0 or later.
A capacity provider strategy may contain a maximum of 6 capacity providers.
capacityProvider (string) --
The short name of the capacity provider.
weight (integer) --
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail.
An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
base (integer) --
The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
launchType (string) --
The launch type the tasks in the service are using. For more information, see Amazon ECS Launch Types in the Amazon Elastic Container Service Developer Guide.
platformVersion (string) --
The platform version that your tasks in the service run on. A platform version is only specified for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide.
platformFamily (string) --
The operating system that your tasks in the service run on. A platform family is specified only for tasks using the Fargate launch type.
All tasks that run as part of this service must use the same platformFamily value as the service (for example, LINUX).
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
rolloutState (string) --
The rollout state of the deployment. When a service deployment is started, it begins in an IN_PROGRESS state. When the service reaches a steady state, the deployment transitions to a COMPLETED state. If the service fails to reach a steady state and circuit breaker is turned on, the deployment transitions to a FAILED state. A deployment in FAILED state doesn't launch any new tasks. For more information, see DeploymentCircuitBreaker.
rolloutStateReason (string) --
A description of the rollout state of a deployment.
serviceConnectConfiguration (dict) --
The details of the Service Connect configuration that's used by this deployment. Compare the configuration between multiple deployments when troubleshooting issues with new deployments.
The configuration for this service to discover and connect to services, and be discovered by, and connected from, other services within a namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
enabled (boolean) --
Specifies whether to use Service Connect with this service.
namespace (string) --
The namespace name or full Amazon Resource Name (ARN) of the Cloud Map namespace for use with Service Connect. The namespace must be in the same Amazon Web Services Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about Cloud Map, see Working with Services in the Cloud Map Developer Guide.
services (list) --
The list of Service Connect service objects. These are names and aliases (also known as endpoints) that are used by other Amazon ECS services to connect to this service.
This field is not required for a "client" Amazon ECS service that's a member of a namespace only to connect to other services within the namespace. An example of this would be a frontend application that accepts incoming requests from either a load balancer that's attached to the service or by other means.
An object selects a port from the task definition, assigns a name for the Cloud Map service, and a list of aliases (endpoints) and ports for client applications to refer to this service.
(dict) --
The Service Connect service object configuration. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
portName (string) --
The portName must match the name of one of the portMappings from all the containers in the task definition of this Amazon ECS service.
discoveryName (string) --
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
clientAliases (list) --
The list of client aliases for this Service Connect service. You use these to assign names that can be used by client applications. The maximum number of client aliases that you can have in this list is 1.
Each alias ("endpoint") is a fully-qualified name and port number that other Amazon ECS tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
For each ServiceConnectService, you must provide at least one clientAlias with one port.
(dict) --
Each alias ("endpoint") is a fully-qualified name and port number that other tasks ("clients") can use to connect to this service.
Each name and port mapping must be unique within the namespace.
Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
port (integer) --
The listening port number for the Service Connect proxy. This port is available inside of all of the tasks within the same namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
dnsName (string) --
The dnsName is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.
If this parameter isn't specified, the default value of discoveryName.namespace is used. If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
To avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are database, db, or the lowercase name of a database, such as mysql or redis. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.
ingressPortOverride (integer) --
The port number for the Service Connect proxy to listen on.
Use the value of this field to bypass the proxy for traffic on the port number specified in the named portMapping in the task definition of this application, and then use it in your VPC security groups to allow traffic into the proxy for this Amazon ECS service.
In awsvpc mode and Fargate, the default value is the container port number. The container port number is in the portMapping in the task definition. In bridge mode, the default value is the ephemeral port of the Service Connect proxy.
timeout (dict) --
A reference to an object that represents the configured timeouts for Service Connect.
idleTimeoutSeconds (integer) --
The amount of time in seconds a connection will stay active while idle. A value of 0 can be set to disable idleTimeout.
The idleTimeout default for HTTP/HTTP2/GRPC is 5 minutes.
The idleTimeout default for TCP is 1 hour.
perRequestTimeoutSeconds (integer) --
The amount of time waiting for the upstream to respond with a complete response per request. A value of 0 can be set to disable perRequestTimeout. perRequestTimeout can only be set if Service Connect appProtocol isn't TCP. Only idleTimeout is allowed for TCP appProtocol.
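To tie these Service Connect fields together, the sketch below shows one service object with a single client alias and explicit timeouts. The namespace, port name, and DNS name are hypothetical and assume the task definition has a port mapping named web.

# Passed as the serviceConnectConfiguration parameter of create_service.
service_connect_configuration = {
    "enabled": True,
    "namespace": "example-namespace",  # hypothetical Cloud Map namespace
    "services": [
        {
            "portName": "web",         # must match a portMapping name in the task definition
            "discoveryName": "web",
            "clientAliases": [
                {"port": 80, "dnsName": "web.internal"}  # name and port that clients in the namespace use
            ],
            "timeout": {
                "idleTimeoutSeconds": 300,       # 5 minutes, the HTTP/HTTP2/GRPC default
                "perRequestTimeoutSeconds": 30,
            },
        }
    ],
}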
tls (dict) --
A reference to an object that represents a Transport Layer Security (TLS) configuration.
issuerCertificateAuthority (dict) --
The signer certificate authority.
awsPcaAuthorityArn (string) --
The ARN of the Amazon Web Services Private Certificate Authority certificate.
kmsKey (string) --
The Amazon Web Services Key Management Service key.
roleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that's associated with the Service Connect TLS.
logConfiguration (dict) --
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks that are on Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
logDriver (string) --
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
options (dict) --
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using the Fargate launch type. Optional for the EC2 launch type.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to appear in the Log pane when using the Amazon ECS console.
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from container to CloudWatch is interrupted.
If you use the blocking mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
max-buffer-size
Required: No
Default value: 1m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services Service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
(string) --
(string) --
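As an illustration of the awslogs options discussed above, a logConfiguration might be sketched as follows; the log group name, Region, and stream prefix are hypothetical placeholders.

# A possible logConfiguration value using the awslogs driver.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/example-service",     # hypothetical log group
        "awslogs-region": "us-east-1",
        "awslogs-create-group": "true",
        "awslogs-stream-prefix": "example-service",  # streams become prefix/container-name/task-id
        "mode": "non-blocking",
        "max-buffer-size": "25m",                    # buffer size for non-blocking delivery
    },
}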
secretOptions (list) --
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
(dict) --
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name (string) --
The name of the secret.
valueFrom (string) --
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
serviceConnectResources (list) --
The list of Service Connect resources that are associated with this deployment. Each list entry maps a discovery name to a Cloud Map service name.
(dict) --
The Service Connect resource. Each configuration maps a discovery name to a Cloud Map service name. The data is stored in Cloud Map as part of the Service Connect configuration for each discovery name of this Amazon ECS service.
A task can resolve the dnsName for each of the clientAliases of a service. However a task can't resolve the discovery names. If you want to connect to a service, refer to the ServiceConnectConfiguration of that service for the list of clientAliases that you can use.
discoveryName (string) --
The discovery name of this Service Connect resource.
The discoveryName is the name of the new Cloud Map service that Amazon ECS creates for this Amazon ECS service. This must be unique within the Cloud Map namespace. The name can contain up to 64 characters. The name can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.
If the discoveryName isn't specified, the port mapping name from the task definition is used in portName.namespace.
discoveryArn (string) --
The Amazon Resource Name (ARN) for the namespace in Cloud Map that matches the discovery name for this Service Connect resource. You can use this ARN in other integrations with Cloud Map. However, Service Connect can't ensure connectivity outside of Amazon ECS.
volumeConfigurations (list) --
The details of the volume that was configuredAtLaunch. You can configure different settings like the size, throughput, volumeType, and encryption in ServiceManagedEBSVolumeConfiguration. The name of the volume must match the name from the task definition.
(dict) --
The configuration for a volume specified in the task definition as a volume that is configured at launch time. Currently, the only supported volume type is an Amazon EBS volume.
name (string) --
The name of the volume. This value must match the volume name from the Volume object in the task definition.
managedEBSVolume (dict) --
The configuration for the Amazon EBS volume that Amazon ECS creates and manages on your behalf. These settings are used to create each Amazon EBS volume, with one volume created for each task in the service. The Amazon EBS volumes are visible in your account in the Amazon EC2 console once they are created.
encrypted (boolean) --
Indicates whether the volume should be encrypted. If no value is specified, encryption is turned on by default. This parameter maps 1:1 with the Encrypted parameter of the CreateVolume API in the Amazon EC2 API Reference.
kmsKeyId (string) --
The Amazon Resource Name (ARN) identifier of the Amazon Web Services Key Management Service key to use for Amazon EBS encryption. When encryption is turned on and no Amazon Web Services Key Management Service key is specified, the default Amazon Web Services managed key for Amazon EBS volumes is used. This parameter maps 1:1 with the KmsKeyId parameter of the CreateVolume API in the Amazon EC2 API Reference.
volumeType (string) --
The volume type. This parameter maps 1:1 with the VolumeType parameter of the CreateVolume API in the Amazon EC2 API Reference. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide.
The following are the supported volume types.
General Purpose SSD: gp2 | gp3
Provisioned IOPS SSD: io1 | io2
Throughput Optimized HDD: st1
Cold HDD: sc1
Magnetic: standard
sizeInGiB (integer) --
The size of the volume in GiB. You must specify either a volume size or a snapshot ID. If you specify a snapshot ID, the snapshot size is used for the volume size by default. You can optionally specify a volume size greater than or equal to the snapshot size. This parameter maps 1:1 with the Size parameter of the CreateVolume API in the Amazon EC2 API Reference.
The following are the supported volume size values for each volume type.
gp2 and gp3: 1-16,384
io1 and io2: 4-16,384
st1 and sc1: 125-16,384
standard: 1-1,024
snapshotId (string) --
The snapshot that Amazon ECS uses to create the volume. You must specify either a snapshot ID or a volume size. This parameter maps 1:1 with the SnapshotId parameter of the CreateVolume API in the Amazon EC2 API Reference.
iops (integer) --
The number of I/O operations per second (IOPS). For gp3, io1, and io2 volumes, this represents the number of IOPS that are provisioned for the volume. For gp2 volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting.
The following are the supported values for each volume type.
gp3: 3,000 - 16,000 IOPS
io1: 100 - 64,000 IOPS
io2: 100 - 256,000 IOPS
This parameter is required for io1 and io2 volume types. The default for gp3 volumes is 3,000 IOPS. This parameter is not supported for st1, sc1, or standard volume types.
This parameter maps 1:1 with the Iops parameter of the CreateVolume API in the Amazon EC2 API Reference.
throughput (integer) --
The throughput to provision for a volume, in MiB/s, with a maximum of 1,000 MiB/s. This parameter maps 1:1 with the Throughput parameter of the CreateVolume API in the Amazon EC2 API Reference.
tagSpecifications (list) --
The tags to apply to the volume. Amazon ECS applies service-managed tags by default. This parameter maps 1:1 with the TagSpecifications.N parameter of the CreateVolume API in the Amazon EC2 API Reference.
(dict) --
The tag specifications of an Amazon EBS volume.
resourceType (string) --
The type of volume resource.
tags (list) --
The tags applied to this Amazon EBS volume. AmazonECSCreated and AmazonECSManaged are reserved tags that can't be used.
(dict) --
The metadata that you apply to a resource to help you categorize and organize them. Each tag consists of a key and an optional value. You define them.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
propagateTags (string) --
Determines whether to propagate the tags from the task definition to the Amazon EBS volume. Tags can only propagate to a SERVICE specified in ServiceVolumeConfiguration. If no value is specified, the tags aren't propagated.
roleArn (string) --
The ARN of the IAM role to associate with this volume. This is the Amazon ECS infrastructure IAM role that is used to manage your Amazon Web Services infrastructure. We recommend using the Amazon ECS-managed AmazonECSInfrastructureRolePolicyForVolumes IAM policy with this role. For more information, see Amazon ECS infrastructure IAM role in the Amazon ECS Developer Guide.
filesystemType (string) --
The filesystem type for the volume. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created. If there is a filesystem type mismatch, the task will fail to start.
The available Linux filesystem types are ext3, ext4, and xfs. If no value is specified, the xfs filesystem type is used by default.
The available Windows filesystem type is NTFS.
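As an illustration of how these Amazon EBS fields fit together, the following boto3 sketch creates a service with a service-managed gp3 volume. The cluster, service, task definition, and role ARN are placeholder values, and the task definition is assumed to declare a volume named data-volume with configuredAtLaunch enabled.

import boto3

ecs = boto3.client("ecs")

response = ecs.create_service(
    cluster="my-cluster",                     # placeholder cluster name
    serviceName="ebs-backed-service",         # placeholder service name
    taskDefinition="my-task-def:1",           # placeholder task definition
    desiredCount=2,
    volumeConfigurations=[
        {
            "name": "data-volume",            # must match the volume name in the task definition
            "managedEBSVolume": {
                "volumeType": "gp3",          # General Purpose SSD
                "sizeInGiB": 100,             # 1-16,384 GiB for gp3
                "iops": 3000,                 # gp3 default; 3,000-16,000 supported
                "throughput": 125,            # MiB/s, up to 1,000
                "filesystemType": "xfs",      # Linux default
                "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",  # placeholder infrastructure role
                "tagSpecifications": [
                    {
                        "resourceType": "volume",
                        "tags": [{"key": "team", "value": "storage"}],
                        "propagateTags": "NONE",
                    }
                ],
            },
        }
    ],
)

Because sizeInGiB is specified here, snapshotId is omitted; the two are alternatives unless the requested size is at least as large as the snapshot.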
fargateEphemeralStorage (dict) --
The Fargate ephemeral storage settings for the deployment.
kmsKeyId (string) --
Specify a Key Management Service key ID to encrypt the ephemeral storage for the deployment.
vpcLatticeConfigurations (list) --
The VPC Lattice configuration for the service deployment.
(dict) --
The VPC Lattice configuration for your service that holds the information for the target group or groups that the Amazon ECS tasks will be registered to.
roleArn (string) --
The ARN of the IAM role to associate with this VPC Lattice configuration. This is the Amazon ECS infrastructure IAM role that is used to manage your VPC Lattice infrastructure.
targetGroupArn (string) --
The full Amazon Resource Name (ARN) of the target group or groups associated with the VPC Lattice configuration that the Amazon ECS tasks will be registered to.
portName (string) --
The name of the port mapping to register in the VPC Lattice target group. This is the name of the portMapping you defined in your task definition.
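A minimal sketch of passing this configuration when creating a service; the target group ARN, infrastructure role ARN, and resource names are placeholders, and portName is assumed to match a port mapping named web in the task definition.

import boto3

ecs = boto3.client("ecs")

response = ecs.create_service(
    cluster="my-cluster",                     # placeholder
    serviceName="lattice-service",            # placeholder
    taskDefinition="my-task-def:1",           # placeholder
    desiredCount=2,
    vpcLatticeConfigurations=[
        {
            "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",  # placeholder infrastructure role
            "targetGroupArn": "arn:aws:vpc-lattice:us-east-1:111122223333:targetgroup/tg-0123456789abcdef0",  # placeholder
            "portName": "web",                # name of the portMapping in the task definition
        }
    ],
)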
roleArn (string) --
The ARN of the IAM role that's associated with the service. It allows the Amazon ECS container agent to register container instances with an Elastic Load Balancing load balancer.
events (list) --
The event stream for your service. A maximum of 100 of the latest events are displayed.
(dict) --
The details for an event that's associated with a service.
id (string) --
The ID string for the event.
createdAt (datetime) --
The Unix timestamp for the time when the event was triggered.
message (string) --
The event message.
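For example, a short boto3 sketch that reads these event fields from DescribeServices; the cluster and service names are placeholders.

import boto3

ecs = boto3.client("ecs")

resp = ecs.describe_services(cluster="my-cluster", services=["my-service"])
for event in resp["services"][0]["events"]:
    # Each event exposes an id, a createdAt timestamp, and a message.
    print(event["createdAt"].isoformat(), event["id"], event["message"])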
createdAt (datetime) --
The Unix timestamp for the time when the service was created.
placementConstraints (list) --
The placement constraints for the tasks in the service.
(dict) --
An object representing a constraint on task placement. For more information, see Task placement constraints in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
expression (string) --
A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
placementStrategy (list) --
The placement strategy that determines how tasks for the service are placed.
(dict) --
The task placement strategy for a task or service. For more information, see Task placement strategies in the Amazon Elastic Container Service Developer Guide.
type (string) --
The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
field (string) --
The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
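The following sketch shows both placement structures on a service that uses the EC2 launch type; the cluster, service, and task definition names are placeholders, and the memberOf expression is only an example of the cluster query language.

import boto3

ecs = boto3.client("ecs")

response = ecs.create_service(
    cluster="my-cluster",                     # placeholder
    serviceName="placed-service",             # placeholder
    taskDefinition="my-task-def:1",           # placeholder
    desiredCount=4,
    launchType="EC2",
    placementConstraints=[
        # memberOf restricts candidates with a cluster query language expression.
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ t3.*"},
    ],
    placementStrategy=[
        # Spread tasks across Availability Zones first ...
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        # ... then pack them onto instances with the least remaining memory.
        {"type": "binpack", "field": "memory"},
    ],
)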
networkConfiguration (dict) --
The VPC subnet and security group configuration for tasks that receive their own elastic network interface by using the awsvpc networking mode.
awsvpcConfiguration (dict) --
The VPC subnets and security groups that are associated with a task.
subnets (list) --
The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration.
(string) --
securityGroups (list) --
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration.
(string) --
assignPublicIp (string) --
Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
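A minimal sketch of an awsvpc network configuration on a Fargate service; the subnet and security group IDs are placeholders (up to 16 subnets and 5 security groups can be supplied).

import boto3

ecs = boto3.client("ecs")

response = ecs.create_service(
    cluster="my-cluster",                     # placeholder
    serviceName="awsvpc-service",             # placeholder
    taskDefinition="my-task-def:1",           # placeholder
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],       # placeholder subnet ID
            "securityGroups": ["sg-0123456789abcdef0"],    # placeholder security group ID
            "assignPublicIp": "DISABLED",                  # the default
        }
    },
)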
healthCheckGracePeriodSeconds (integer) --
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started.
schedulingStrategy (string) --
The scheduling strategy to use for the service. For more information, see Services.
There are two service scheduler strategies available.
REPLICA - The replica scheduling strategy places and maintains the desired number of tasks across your cluster. By default, the service scheduler spreads tasks across Availability Zones. You can use task placement strategies and constraints to customize task placement decisions.
DAEMON - The daemon scheduling strategy deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. The service scheduler also evaluates the task placement constraints for running tasks. It stops tasks that don't meet the placement constraints.
deploymentController (dict) --
The deployment controller type the service is using.
type (string) --
The deployment controller type to use.
There are three deployment controller types available:
ECS
The rolling update ( ECS) deployment type involves replacing the current running version of the container with the latest version. The number of containers Amazon ECS adds or removes from the service during a rolling update is controlled by adjusting the minimum and maximum number of healthy tasks allowed during a service deployment, as specified in the DeploymentConfiguration.
CODE_DEPLOY
The blue/green ( CODE_DEPLOY) deployment type uses the blue/green deployment model powered by CodeDeploy, which allows you to verify a new deployment of a service before sending production traffic to it.
EXTERNAL
The external ( EXTERNAL) deployment type enables you to use any third-party deployment controller for full control over the deployment process for an Amazon ECS service.
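To illustrate the scheduling strategy and deployment controller fields together, here is a sketch of a rolling-update replica service; the names are placeholders, and the minimum/maximum healthy percentages shown are only one common choice.

import boto3

ecs = boto3.client("ecs")

response = ecs.create_service(
    cluster="my-cluster",                     # placeholder
    serviceName="rolling-service",            # placeholder
    taskDefinition="my-task-def:1",           # placeholder
    desiredCount=3,
    schedulingStrategy="REPLICA",             # spread the desired count across the cluster
    deploymentController={"type": "ECS"},     # rolling update managed by Amazon ECS
    deploymentConfiguration={
        "minimumHealthyPercent": 100,         # keep at least the desired count running
        "maximumPercent": 200,                # allow up to twice the desired count during a deployment
    },
)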
tags (list) --
The metadata that you apply to the service to help you categorize and organize it. Each tag consists of a key and an optional value. You define both the key and the value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, because the prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
(dict) --
The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value. You define both the key and the value.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of these as a prefix for either keys or values, because the prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
key (string) --
One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
value (string) --
The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
createdBy (string) --
The principal that created the service.
enableECSManagedTags (boolean) --
Determines whether to use Amazon ECS managed tags for the tasks in the service. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
propagateTags (string) --
Determines whether to propagate the tags from the task definition or the service to the task. If no value is specified, the tags aren't propagated.
enableExecuteCommand (boolean) --
Determines whether the execute command functionality is turned on for the service. If true, the execute command functionality is turned on for all containers in tasks as part of the service.
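Finally, a sketch of the tagging and ECS Exec settings described above; all names and tag values are placeholders.

import boto3

ecs = boto3.client("ecs")

response = ecs.create_service(
    cluster="my-cluster",                     # placeholder
    serviceName="tagged-service",             # placeholder
    taskDefinition="my-task-def:1",           # placeholder
    desiredCount=1,
    tags=[{"key": "project", "value": "billing"}],  # up to 50 tags per resource
    enableECSManagedTags=True,                # apply Amazon ECS managed tags to the service's tasks
    propagateTags="SERVICE",                  # copy the service's tags to its tasks
    enableExecuteCommand=True,                # turn on the execute command functionality for containers
)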