Degraded

Google Cloud APIs Degradation

Jun 12 at 07:27pm BST
Affected services
API

Resolved
Jun 14 at 12:05pm BST

Google has published their full incident report at https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW

We are also sharing it here for your convenience:


Incident Report

Summary

Google Cloud, Google Workspace and Google Security Operations products experienced increased 503 errors in external API requests, impacting customers.

We deeply apologize for the impact this outage has had. Google Cloud customers and their users trust their businesses to Google, and we will do better. We apologize for the impact this has had not only on our customers’ businesses and their users but also on the trust in our systems. We are committed to making improvements to help avoid outages like this moving forward.

What happened?

Google and Google Cloud APIs are served through our Google API management and control planes. These management and control planes are distributed regionally and are responsible for ensuring that each incoming API request is authorized and passes the appropriate policy and quota checks before reaching its endpoint. The core binary in this policy check system is known as Service Control. Service Control is a regional service that reads quota and policy information from a regional datastore. This datastore metadata is replicated globally almost instantly to manage quota policies for Google Cloud and our customers.
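
For illustration only, the sketch below (in Go, with entirely hypothetical names such as PolicyStore and CheckRequest; this is not Google's actual code) shows what a per-request quota/policy check backed by a regional datastore might look like.

```go
// A minimal, hypothetical sketch (not Google's actual code) of a per-request
// quota/policy check backed by a regional datastore, as described above.
package servicecontrol

import (
	"context"
	"fmt"
)

// Policy holds the quota/policy metadata replicated into each regional store.
type Policy struct {
	Project    string
	QuotaLimit int64
}

// PolicyStore abstracts the regional datastore that a Service Control-style
// binary reads quota and policy information from.
type PolicyStore interface {
	Lookup(ctx context.Context, project string) (*Policy, error)
}

// CheckRequest applies the policy and quota checks an API request must pass
// before it is allowed through to its endpoint.
func CheckRequest(ctx context.Context, store PolicyStore, project string, used int64) error {
	pol, err := store.Lookup(ctx, project)
	if err != nil {
		return fmt.Errorf("policy lookup failed: %w", err)
	}
	if used >= pol.QuotaLimit {
		return fmt.Errorf("quota exceeded for project %q", project)
	}
	return nil // request may proceed to its endpoint
}
```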

On May 29, 2025, a new feature was added to Service Control for additional quota policy checks. This code change and binary release went through our region-by-region rollout, but the code path that failed was never exercised during the rollout because it required a policy change to trigger it. As a safety precaution, this code change came with a red-button to turn off that particular policy serving path. The issue with this change was that it had neither appropriate error handling nor feature-flag protection. Without appropriate error handling, the null pointer caused the binary to crash. Feature flags are used to enable a feature gradually, region by region and project by project, starting with internal projects, so that we can catch issues early. If this change had been flag protected, the issue would have been caught in staging.
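
For illustration only, here is a hypothetical Go sketch of such a check with the two missing safeguards added: a policy whose fields arrive blank is treated as an error rather than dereferenced, and the new path sits behind a feature flag that is disabled by default. None of these names come from Google's code.

```go
// Hypothetical sketch of a new quota-policy check with the safeguards the
// report says were missing: defensive handling of blank/nil policy fields
// and a feature flag that defaults to off.
package servicecontrol

import "errors"

// quotaPolicyChecksEnabled stands in for a per-region, per-project feature
// flag; keeping it off by default means a latent bug cannot fire everywhere
// at once.
var quotaPolicyChecksEnabled = false

// QuotaPolicy models a policy row; Limits may be nil if the row was written
// with blank fields.
type QuotaPolicy struct {
	Limits map[string]int64
}

var errMalformedPolicy = errors.New("quota policy has blank or missing fields")

func checkQuotaPolicy(p *QuotaPolicy, metric string, used int64) error {
	if !quotaPolicyChecksEnabled {
		return nil // new path stays dark until gradually enabled
	}
	// Guard against blank fields instead of crashing on a nil dereference.
	if p == nil || p.Limits == nil {
		return errMalformedPolicy
	}
	if limit, ok := p.Limits[metric]; ok && used >= limit {
		return errors.New("quota exceeded")
	}
	return nil
}
```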

On June 12, 2025 at ~10:45am PDT, a policy change was inserted into the regional Spanner tables that Service Control uses for policies. Given the global nature of quota management, this metadata was replicated globally within seconds. This policy data contained unintended blank fields. Service Control then exercised quota checks on policies in each regional datastore, pulled in the blank fields for this policy change, and hit the code path with the null pointer, causing the binaries to go into a crash loop. Because each region has its own deployment, this occurred globally.

Within 2 minutes, our Site Reliability Engineering team was triaging the incident. Within 10 minutes, the root cause was identified and the red-button (to disable the serving path) was being put in place. The red-button was ready to roll out ~25 minutes from the start of the incident. Within 40 minutes of the incident, the red-button rollout was completed, and we started seeing recovery across regions, beginning with the smaller ones.

Within some of our larger regions, such as us-central1, the restarting Service Control tasks created a herd effect on the underlying infrastructure they depend on (i.e. the Spanner table), overloading it. Service Control did not have the appropriate randomized exponential backoff implemented to avoid this. It took up to ~2h 40 min to fully resolve in us-central1, as we throttled task creation to minimize the impact on the underlying infrastructure and routed traffic to multi-regional databases to reduce the load. At that point, Service Control and API serving were fully recovered across all regions. Corresponding Google and Google Cloud products started recovering, with some taking longer depending on their architecture.
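
For context, the snippet below is a generic Go sketch of randomized exponential backoff ("full jitter"), the kind of retry behaviour the report says was missing when Service Control tasks restarted; the names and constants are illustrative, not Google's.

```go
// Generic sketch of randomized exponential backoff ("full jitter"): each
// retry sleeps a random duration below an exponentially growing cap, so a
// fleet of restarting tasks spreads its load on the datastore instead of
// hitting it in lockstep.
package servicecontrol

import (
	"context"
	"math/rand"
	"time"
)

func retryWithBackoff(ctx context.Context, op func() error) error {
	const (
		base    = 100 * time.Millisecond
		maxWait = 30 * time.Second
	)
	wait := base
	for {
		if err := op(); err == nil {
			return nil
		}
		// Sleep a random duration in [0, wait) rather than a fixed interval.
		sleep := time.Duration(rand.Int63n(int64(wait)))
		select {
		case <-time.After(sleep):
		case <-ctx.Done():
			return ctx.Err()
		}
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
}
```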

What is our immediate path forward?

Immediately upon recovery, we froze all changes to the Service Control stack, as well as manual policy pushes, until we can completely remediate the system.

How did we communicate?

We posted our first incident report to Cloud Service Health ~1h after the start of the crashes, because the Cloud Service Health infrastructure was itself down due to this outage. For some customers, the monitoring infrastructure they had running on Google Cloud was also failing, leaving them without a signal of the incident or an understanding of the impact to their business and/or infrastructure. We will address this going forward.

What’s our approach moving forward?

Beyond freezing the system as mentioned above, we will prioritize and safely complete the following:

  • We will modularize Service Control’s architecture so that functionality is isolated and fails open: if a corresponding check fails, Service Control can still serve API requests (see the sketch after this list).
  • We will audit all systems that consume globally replicated data. Regardless of the business need for near-instantaneous global consistency (e.g. quota management settings are global), data needs to be propagated incrementally, with sufficient time to validate changes and detect issues.
  • We will enforce all changes to critical binaries to be feature flag protected and disabled by default.
  • We will improve our static analysis and testing practices to ensure errors are handled correctly and, where needed, fail open.
  • We will audit and ensure our systems employ randomized exponential backoff.
  • We will improve our external communications, both automated and human, so our customers get the information they need as soon as possible to react to issues, manage their systems and help their customers.
  • We will ensure our monitoring and communication infrastructure remains operational even when Google Cloud and our primary monitoring products are down, so that customers keep visibility and business continuity.
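
To make the “fail open” point above concrete, here is a hypothetical Go sketch of an isolated, non-critical check: if the check itself errors or panics, the failure is logged and the request is served anyway, rather than taking down API serving. The names are illustrative only.

```go
// Illustrative "fail open" wrapper: a broken non-critical check degrades to
// allowing the request and alerting operators, instead of blocking all API
// traffic or crashing the serving binary.
package servicecontrol

import (
	"context"
	"log"
)

func runIsolatedCheck(ctx context.Context, name string,
	check func(context.Context) (bool, error)) (allowed bool) {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("check %q panicked, failing open: %v", name, r)
			allowed = true
		}
	}()
	ok, err := check(ctx)
	if err != nil {
		log.Printf("check %q errored, failing open: %v", name, err)
		return true
	}
	return ok // a deliberate "deny" decision is still honoured
}
```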

Updated
Jun 12 at 10:45pm BST

Most Google Cloud services have recovered, and with them our services have now fully recovered.

Updated
Jun 12 at 09:13pm BST

Ventrata services have mostly recovered, and we are seeing stable performance across core systems. Google Cloud engineers are still working to fully mitigate the underlying issue on their side.

We continue to monitor closely and will provide further updates as needed.

Updated
Jun 12 at 08:26pm BST

Google Cloud engineers are actively working to mitigate the ongoing issue. Recovery has been observed in some regions, but full service restoration is still in progress.

At this time, there is no estimated time for full resolution. We continue to monitor closely and will provide updates as more information becomes available.

You can follow official updates directly from the Google Cloud status page here:
https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW#2c2sBHWU84yP

Updated
Jun 12 at 07:49pm BST

Google Cloud has acknowledged the ongoing global outage. You can follow official updates directly from their status page here:

https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW#2c2sBHWU84yPDJ8y1ar4

Updated
Jun 12 at 07:45pm BST

We’ve identified that a current outage affecting Google Cloud is causing issues with our web checkout orders and payment processing. As a result, certain transactions may not be completing successfully.

We are actively monitoring the situation and working to assess the full impact. Further updates will be posted here as we learn more.

Created
Jun 12 at 07:27pm BST

We are currently investigating a reported degradation affecting Google Cloud APIs. While our systems remain operational at this time, there may be indirect impacts.

We are monitoring the situation closely and will provide updates as we learn more.

Thank you for your patience.