Between 11:49 PM PDT on October 19 and 2:24 AM PDT on October 20, Amazon experienced increased error rates and latencies for AWS services in the US-EAST-1 Region. Services and features that rely on US-EAST-1 endpoints, such as IAM and DynamoDB Global Tables, were also affected during this time. At 12:26 AM on October 20, Amazon identified the trigger of the event as DNS resolution issues for the regional DynamoDB service endpoints.

After the DynamoDB DNS issue was resolved at 2:24 AM, services began recovering, but a subsequent impairment arose in the internal EC2 subsystem responsible for launching instances, owing to that subsystem's dependency on DynamoDB. As Amazon worked through the EC2 instance launch impairments, Network Load Balancer health checks also became impaired, causing network connectivity issues in multiple services, including Lambda, DynamoDB, and CloudWatch. Amazon restored Network Load Balancer health checks at 9:38 AM.

As part of the recovery effort, Amazon temporarily throttled some operations, such as EC2 instance launches, processing of SQS queues via Lambda Event Source Mappings, and asynchronous Lambda invocations. Over time, Amazon relaxed these throttles and worked in parallel to resolve the remaining network connectivity issues until services fully recovered. By 3:01 PM on October 20, all AWS services had returned to normal operations.
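
To make the trigger concrete, the sketch below is a minimal client-side illustration (assuming Python and the standard regional endpoint name dynamodb.us-east-1.amazonaws.com, not any tooling Amazon itself used) of the kind of DNS resolution failure described above: a lookup that returns no addresses is what callers in the region would have encountered until the DNS issue was resolved at 2:24 AM.

```python
import socket

# Standard regional DynamoDB endpoint for US-EAST-1 (assumed here for illustration).
ENDPOINT = "dynamodb.us-east-1.amazonaws.com"


def endpoint_resolves(hostname: str, port: int = 443) -> bool:
    """Return True if the hostname currently resolves to at least one address."""
    try:
        # Perform the same kind of lookup an HTTPS client does before connecting.
        addresses = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        # A resolution failure (e.g., no answer for the name) lands here.
        return False
    return len(addresses) > 0


if __name__ == "__main__":
    if endpoint_resolves(ENDPOINT):
        print(f"{ENDPOINT} resolves; clients can reach the regional endpoint.")
    else:
        print(f"{ENDPOINT} does not resolve; DynamoDB API calls in the region would fail.")
```

A failure at this step also suggests why the impact spread: anything that calls DynamoDB through the regional endpoint, including the internal EC2 launch subsystem noted above, fails in the same way regardless of its own health.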