Incidents | Ventrata
Incidents reported on the status page for Ventrata
https://status.ventrata.com/

Intermittent Issues with Checkout, CMS and OCTO API
https://status.ventrata.com/incident/609439

Thu, 26 Jun 2025 08:49:00 -0000
Google Cloud Run customers experienced a service disruption.

Start: 26 June 2025 at 07:16 UTC+1
Symptom identified: 26 June 2025 at 08:32 UTC+1
Resolved: 26 June 2025 at 08:42 UTC+1

Impact: Some of our components that rely on Google Cloud Run were temporarily impacted due to an upstream issue. This may have caused brief service interruptions for a subset of users.
Root Cause: The disruption was related to a misconfiguration in the networking infrastructure supporting Cloud Run, which has since been addressed.
Resolution: Google Cloud engineers identified and corrected the issue, and services were restored to normal operation shortly thereafter.
Current Status: ✅ Fully resolved. All systems are operating normally.

Thank you for your patience while we monitored and awaited resolution from our cloud provider.

Thu, 26 Jun 2025 07:15:00 -0000
The intermittent issues affecting Checkout, CMS, and OCTO API have been resolved, and systems are returning to normal. We’re continuing to monitor stability and will share an incident summary shortly.

Thu, 26 Jun 2025 06:59:16 +0000: CMS recovered
Thu, 26 Jun 2025 06:59:05 +0000: API recovered

Thu, 26 Jun 2025 06:50:00 -0000
We're currently experiencing intermittent disruptions impacting the Checkout, CMS, and OCTO API products. This may result in degraded performance for select customers. Our engineering teams are actively working to identify the root cause and implement a fix. We’ll provide updates here as we make progress. Thank you for your patience.

Thu, 26 Jun 2025 06:16:36 +0000: CMS went down
Thu, 26 Jun 2025 06:16:27 +0000: API went down

Google Cloud APIs Degradation
https://status.ventrata.com/incident/601828

Sat, 14 Jun 2025 11:05:00 -0000
Google has published their full incident report on https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW. We are sharing it here for your convenience:

----------------------

## Incident Report

### Summary

Google Cloud, Google Workspace and Google Security Operations products experienced increased 503 errors in external API requests, impacting customers.

We deeply apologize for the impact this outage has had. Google Cloud customers and their users trust their businesses to Google, and we will do better. We apologize for the impact this has had not only on our customers’ businesses and their users but also on the trust of our systems. We are committed to making improvements to help avoid outages like this moving forward.

### What happened?

Google and Google Cloud APIs are served through our Google API management and control planes. Distributed regionally, these management and control planes are responsible for ensuring each incoming API request is authorized and passes the policy and appropriate checks (like quota) for its endpoint. The core binary in this policy check system is known as Service Control. Service Control is a regional service with a regional datastore from which it reads quota and policy information. This datastore metadata is replicated almost instantly worldwide to manage quota policies for Google Cloud and our customers.

On May 29, 2025, a new feature was added to Service Control for additional quota policy checks. This code change and binary release went through our region-by-region rollout, but the code path that failed was never exercised during this rollout because it required a policy change to trigger it. As a safety precaution, this code change came with a red-button to turn off that particular policy serving path. The issue with this change was that it had neither appropriate error handling nor feature flag protection. Without the appropriate error handling, the null pointer caused the binary to crash. Feature flags are used to gradually enable the feature region by region per project, starting with internal projects, to enable us to catch issues. If this change had been flag protected, the issue would have been caught in staging.
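
To make the failure mode above concrete, here is a minimal sketch of a feature-flagged quota check that fails open when a policy record arrives with blank fields. The names (`QuotaPolicy`, `check_quota`, the flag) are hypothetical and are not taken from Google's Service Control; this only illustrates the two missing safeguards the report describes.

```python
# Hypothetical sketch: feature-flagged, fail-open policy check.
# Names and structures are invented for illustration; they are not
# Google's Service Control internals.
from dataclasses import dataclass
from typing import Optional

FLAGS = {"enable_new_quota_policy_check": False}  # new path off by default


@dataclass
class QuotaPolicy:
    project_id: Optional[str]
    limit: Optional[int]  # a bad policy push may leave this blank


def check_quota(policy: Optional[QuotaPolicy], usage: int) -> bool:
    """Return True if the request should be allowed to proceed."""
    if not FLAGS["enable_new_quota_policy_check"]:
        return True  # flag off: behave exactly as before the change

    try:
        if policy is None or policy.limit is None:
            # Fail open: an incomplete policy must not crash the server.
            print("quota policy incomplete; failing open")
            return True
        return usage < policy.limit
    except Exception as exc:
        # Defensive catch-all so a policy-handling bug degrades gracefully
        # instead of crash-looping the serving binary.
        print(f"quota check error ({exc}); failing open")
        return True
```

With the flag disabled by default, the new code path would have been enabled gradually, project by project, and the blank-field case would have surfaced in staging or internal projects rather than everywhere at once after a global policy push.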

On June 12, 2025 at ~10:45am PDT, a policy change was inserted into the regional Spanner tables that Service Control uses for policies. Given the global nature of quota management, this metadata was replicated globally within seconds. This policy data contained unintended blank fields. Service Control then regionally exercised quota checks on policies in each regional datastore. This pulled in the blank fields for this policy change and exercised the code path that hit the null pointer, causing the binaries to go into a crash loop. This occurred globally given each regional deployment.

Within 2 minutes, our Site Reliability Engineering team was triaging the incident. Within 10 minutes, the root cause was identified and the red-button (to disable the serving path) was being put in place. The red-button was ready to roll out ~25 minutes from the start of the incident. Within 40 minutes of the incident, the red-button rollout was completed, and we started seeing recovery across regions, starting with the smaller ones first. In some of our larger regions, such as us-central1, the restarting Service Control tasks created a herd effect on the underlying infrastructure they depend on (i.e. that Spanner table), overloading it. Service Control did not have the appropriate randomized exponential backoff implemented to avoid this. It took up to ~2h 40 mins to fully resolve in us-central1 as we throttled task creation to minimize the impact on the underlying infrastructure and routed traffic to multi-regional databases to reduce the load. At that point, Service Control and API serving had fully recovered across all regions. Corresponding Google and Google Cloud products started recovering, with some taking longer depending upon their architecture.
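
The "herd effect" above is exactly what randomized (jittered) exponential backoff is meant to prevent: restarting tasks spread their retries out over time instead of hitting the shared datastore in lockstep. A minimal sketch follows; the retry budget, delays, and the commented-out call are illustrative assumptions, not values from the report.

```python
# Minimal sketch of retry with randomized exponential backoff ("full jitter").
# All parameters here are illustrative defaults, not from the incident report.
import random
import time


def call_with_backoff(operation, max_attempts=6, base_delay=0.5, max_delay=30.0):
    """Call `operation`, retrying with an exponentially growing, randomized delay."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the error
            # Full jitter: sleep a uniform random time in
            # [0, min(max_delay, base_delay * 2**attempt)].
            delay = random.uniform(0.0, min(max_delay, base_delay * (2 ** attempt)))
            time.sleep(delay)


# Hypothetical usage: each restarting task retries its policy read on its own
# randomized schedule, so thousands of tasks do not all hammer the backend
# at the same instant.
# call_with_backoff(lambda: read_policy_from_regional_datastore("us-central1"))
```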

### What is our immediate path forward?

Immediately upon recovery, we froze all changes to the Service Control stack and manual policy pushes until we can completely remediate the system.

### How did we communicate?

We posted our first incident report to Cloud Service Health about ~1h after the start of the crashes, because the Cloud Service Health infrastructure was itself down due to this outage. For some customers, the monitoring infrastructure they had running on Google Cloud was also failing, leaving them without a signal of the incident or an understanding of the impact to their business and/or infrastructure. We will address this going forward.

### What’s our approach moving forward?

Beyond freezing the system as mentioned above, we will prioritize and safely complete the following:

* We will modularize Service Control’s architecture, so the functionality is isolated and fails open. Thus, if a corresponding check fails, Service Control can still serve API requests.
* We will audit all systems that consume globally replicated data. Regardless of the business need for near-instantaneous global consistency of the data (i.e. quota management settings are global), data replication needs to be propagated incrementally, with sufficient time to validate and detect issues.
* We will enforce that all changes to critical binaries are feature flag protected and disabled by default.
* We will improve our static analysis and testing practices to correctly handle errors and, if need be, fail open.
* We will audit our systems and ensure they employ randomized exponential backoff.
* We will improve our external communications, both automated and human, so our customers get the information they need as soon as possible to react to issues, manage their systems and help their customers.
* We’ll ensure our monitoring and communication infrastructure remains operational to serve customers even when Google Cloud and our primary monitoring products are down, ensuring business continuity.

Thu, 12 Jun 2025 21:45:00 -0000
Most Google Cloud services have recovered, and with them our services are now fully restored.

Thu, 12 Jun 2025 20:13:00 -0000
Ventrata services have mostly recovered, and we are seeing stable performance across core systems. Google Cloud engineers are still working to fully mitigate the underlying issue on their side. We continue to monitor closely and will provide further updates as needed.

Thu, 12 Jun 2025 19:26:00 -0000
Google Cloud engineers are actively working to mitigate the ongoing issue. Recovery has been observed in some regions, but full service restoration is still in progress. At this time, there is no estimated time for full resolution. We continue to monitor closely and will provide updates as more information becomes available. You can follow official updates directly from the Google Cloud status page: https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW#2c2sBHWU84yP

Thu, 12 Jun 2025 18:49:00 -0000
Google Cloud has acknowledged the ongoing global outage. You can follow official updates directly from their status page: https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW#2c2sBHWU84yPDJ8y1ar4

Thu, 12 Jun 2025 18:45:00 -0000
We’ve identified that a current outage affecting Google Cloud is causing issues with our web checkout orders and payment processing. As a result, certain transactions may not be completing successfully. We are actively monitoring the situation and working to assess the full impact. Further updates will be posted here as we learn more.

Thu, 12 Jun 2025 18:27:00 -0000
We are currently investigating a reported degradation affecting Google Cloud APIs. While our systems remain operational at this time, there may be indirect impacts. We are monitoring the situation closely and will provide updates as we learn more. Thank you for your patience.

Viator Self-Mapping Functionality Temporarily Unavailable
https://status.ventrata.com/incident/571239

Mon, 19 May 2025 11:38:00 -0000
We’re happy to share that the issue affecting Viator Self-Mapping in the Ventrata interface has now been resolved. Our team addressed the recent change on Viator’s side, and the product sync is now functioning as expected. Users can once again map and remap products directly from the Ventrata interface without issue.

🛠 No further action is needed; you can resume normal use of the self-mapping functionality.

📩 Need help? If you continue to experience any issues, please don’t hesitate to reach out via live chat or email us at support@ventrata.com.

Thank you for your patience!

Mon, 19 May 2025 11:00:00 -0000
We are currently aware of an issue affecting the Viator Self-Mapping functionality within the Ventrata interface. Some customers may encounter a “No active products” message when attempting to map or remap products via the Ventrata connections page, even though they have active products in their Viator accounts.

🛠️ Root Cause Identified: This disruption is due to a recent change on Viator’s side (Supplier Administration Interface) that is temporarily preventing us from accessing product data and syncing it correctly into the Ventrata platform.

✅ Important Notes: Bookings for already mapped products are not affected. The issue only impacts new mappings or remappings from within the Ventrata interface. Mapping via the Viator interface still works as a temporary workaround.

🔄 Workaround: Please use the Viator interface (Supplier Administration) to map products directly while we resolve the issue.

🚧 Fix in Progress: Our engineering team is actively working on a resolution as a top priority, and we’ll share another update as soon as the issue is fixed.

Thank you for your patience and understanding!

Delay in transactional email delivery
https://status.ventrata.com/incident/562453

Tue, 13 May 2025 16:13:00 -0000
The issue with transactional email delays has been fully resolved. All previously queued emails have now been successfully delivered, and the system is operating normally.

Tue, 13 May 2025 15:13:00 -0000
New transactional emails are now being delivered without delays. However, some emails from earlier remain queued due to the previous issue. We are actively working to deliver all pending messages as quickly as possible.

Tue, 13 May 2025 13:03:00 -0000
We are currently experiencing degraded performance with Ventrata's email system. As a result, all transactional emails are being delivered with delays. Our team is investigating and working to restore normal delivery times as soon as possible.

Checkout degradation
https://status.ventrata.com/incident/548611

Tue, 22 Apr 2025 10:54:00 -0000
Our partner Adyen has released a Root Cause Analysis on their webpage, which you can find at the following link: https://www.adyen.com/knowledge-hub/mitigating-a-ddos-april-2025

Tue, 22 Apr 2025 06:16:00 -0000
Our partner Adyen communicated that their Checkout services performance has stabilised since 02:16 CEST. At this point, all payment processing capabilities have returned to normal. https://status.adyen.com/

Mon, 21 Apr 2025 19:16:00 -0000
Adyen is again experiencing issues. We will post updates here as soon as there is more information. In the meantime, follow https://status.adyen.com/

Mon, 21 Apr 2025 18:33:00 -0000
Adyen payment processing has now fully recovered, and with it our checkout degradation is resolved. https://status.adyen.com/

Mon, 21 Apr 2025 17:12:00 -0000
Our partner Adyen has acknowledged an issue and is working to resolve it. Follow their status page for more detailed updates: https://status.adyen.com/

Mon, 21 Apr 2025 17:03:00 -0000
We’re investigating an issue with Adyen that is impacting some users. We’ll share another update shortly.

Viator Self-Mapping Functionality Temporarily Unavailable
https://status.ventrata.com/incident/545627

Fri, 18 Apr 2025 12:06:00 -0000
Recent improvements have resolved the issue for most users. We’re continuing to monitor. If you’re still experiencing issues, please contact our support team via 24/7 live chat.

Fri, 18 Apr 2025 07:39:00 -0000
Following recent updates, the situation has improved, but some suppliers may still see a “No active products” message or an incomplete product list when mapping or remapping Viator products via the Ventrata connection interface. This is due to a temporary sync issue. Bookings and already mapped products are not affected, and new mappings can still be completed directly from the Viator portal. As this issue does not impact sales, we expect to release a fix after the long weekend. We appreciate your patience in the meantime.

Mon, 14 Apr 2025 16:33:58 +0000: Dashboard recovered
Mon, 14 Apr 2025 16:30:57 +0000: API recovered
Mon, 14 Apr 2025 16:25:56 +0000: Dashboard went down
Mon, 14 Apr 2025 16:23:53 +0000: Dashboard recovered
Mon, 14 Apr 2025 16:14:23 +0000: API went down
Mon, 14 Apr 2025 16:11:56 +0000: Dashboard went down
Mon, 14 Apr 2025 16:03:02 +0000: Dashboard recovered
Mon, 14 Apr 2025 16:01:55 +0000: CMS recovered
Mon, 14 Apr 2025 15:59:58 +0000: API recovered
Mon, 14 Apr 2025 15:55:12 +0000: CMS went down
Mon, 14 Apr 2025 15:52:57 +0000: API went down
Mon, 14 Apr 2025 15:52:56 +0000: Dashboard went down
Mon, 14 Apr 2025 12:28:21 +0000: CMS recovered
Mon, 14 Apr 2025 12:21:51 +0000: CMS went down
Wed, 02 Apr 2025 09:28:20 +0000: CMS recovered
Wed, 02 Apr 2025 09:22:44 +0000: CMS went down