AWS Virginia overheating disrupts US‑East‑1; Coinbase, FanDuel hit
An AWS data center in Northern Virginia overheated, disrupting the US‑East‑1 use1-az4 Availability Zone, halting Coinbase trading for more than five hours and affecting FanDuel and the CME Group.
An overheating event at an Amazon Web Services data center in Northern Virginia caused an outage in the US‑East‑1 use1-az4 Availability Zone, interrupting services for customers hosted in the region. The incident disrupted Coinbase trading for more than five hours and affected users of FanDuel and the CME Group, among other customers.
AWS reported the issue as a thermal event that increased temperatures within a single data center and, in some cases, impaired instances in the affected Availability Zone. The provider shifted traffic away from the impacted zone while teams worked to restore cooling and recover infrastructure. AWS wrote in a status update that “EC2 instances and EBS volumes hosted on impacted hardware are affected by the loss of power during the thermal event.” The company said mitigation was taking longer than expected and that it did not have an estimated time for full recovery.
Coinbase reported disruptions to core exchange functions that lasted more than five hours and warned some users could see delayed Solana network sends and receives as well as issues with ALEO. The exchange later announced trading services were restored, posting that “All markets have been re-enabled for trading on coinbase.com and in the Coinbase iOS and Android apps. Coinbase customers can log in to trade.”
FanDuel posted on X that its team was investigating technical problems that were preventing users from accessing the platform. The CME Group reported platform impacts tied to the AWS outage. Other companies and services hosted in the US‑East‑1 region experienced interruptions as AWS worked to recover affected hardware.
AWS provided a partial list of services it had restored, noting that AWS IoT Core, NAT Gateway, Amazon Elastic Kubernetes Service, Elastic Load Balancing and Amazon Redshift were back online. The provider said some services remained impacted at the time of its update, including Amazon ElastiCache, Amazon Managed Streaming for Apache Kafka, Amazon OpenSearch Service and Amazon SageMaker. AWS warned that some customers would continue to see impaired EC2 instances and EBS volumes until recovery was complete.
The US‑East‑1 region is one of AWS’s busiest and runs a wide range of consumer and enterprise workloads. AWS did not provide details on what triggered the overheating, only that additional cooling capacity was being brought online in a controlled manner to prevent further hardware damage.
Last year, a separate global AWS outage, traced to a DNS fault, disrupted hundreds of applications and websites. Customers affected by the current incident were advised to monitor AWS service health updates and follow their own incident response plans while AWS continued recovery work.