Service Degradation
Resolved
Apr 15 at 12:15pm BST
Between 16:52 and 17:25 BST yesterday, we experienced a 33-minute period of service degradation that caused all channels to become either unresponsive or significantly slower. We've completed a detailed investigation and identified the root cause as a sharp spike in traffic volume driven by aggressive pre-emptive loading of our checkout widgets. In response, we've refined the loading logic and increased service capacity tenfold. With these changes in place, we do not expect a recurrence.
The pre-emptive loading was intended to improve speed by fetching product content and availability ahead of user interaction. However, requests were triggered too frequently—on every page load—and became particularly costly following the introduction of "calendar-first" views, which require heavier data queries. We've now tuned this logic to achieve the same performance benefits with a much lower request footprint.
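To make the change concrete, here is a minimal, illustrative sketch of intent-based prefetching; the endpoint path, helper names, and event triggers below are hypothetical and not our actual widget code:

```typescript
// Illustrative only: prefetch availability on user intent rather than
// on every page load, and deduplicate concurrent requests via a cache.

type Availability = { productId: string; slots: string[] };

// Shared cache of in-flight/completed requests for the page session.
const cache = new Map<string, Promise<Availability>>();

// Hypothetical endpoint shape; the real widget API may differ.
function fetchAvailability(productId: string): Promise<Availability> {
  return fetch(`/api/availability/${productId}`).then((r) => r.json());
}

// Concurrent callers share a single in-flight request per product.
function prefetchAvailability(productId: string): Promise<Availability> {
  let pending = cache.get(productId);
  if (!pending) {
    pending = fetchAvailability(productId);
    cache.set(productId, pending);
  }
  return pending;
}

// Fire only when the visitor shows intent (hover or keyboard focus),
// instead of eagerly on page load.
export function attachPrefetch(widget: HTMLElement, productId: string): void {
  const onIntent = () => void prefetchAvailability(productId);
  widget.addEventListener("pointerenter", onIntent, { once: true });
  widget.addEventListener("focusin", onIntent, { once: true });
}
```

Sketched this way, visitors who actually open the checkout still get pre-warmed data, while pages that are merely viewed generate no availability traffic at all.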
This issue was exacerbated by record-breaking traffic in the lead-up to Easter, which is typically our highest-traffic period of the year. Notably, the additional capacity we've introduced would have absorbed 8–9x the traffic seen during the incident without degradation.
Our infrastructure team is also conducting a broader review to identify early warning signals, improve traffic shaping, and introduce automated mitigation strategies to reduce the likelihood of similar incidents in the future.
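To illustrate what traffic shaping can look like in practice (a generic sketch, not a description of our production setup), a token bucket admits short bursts while capping the sustained request rate, shedding or queuing the excess:

```typescript
// Generic token-bucket rate limiter: absorbs bursts up to `capacity`
// and enforces a long-run rate of `refillPerSec` requests per second.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(
    private readonly capacity: number,     // maximum burst size
    private readonly refillPerSec: number, // sustained request rate
  ) {
    this.tokens = capacity;
  }

  // True if the request may proceed; false if it should be shed or queued.
  tryAcquire(): boolean {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Example: tolerate bursts of 100 requests while holding 50 req/s overall.
const limiter = new TokenBucket(100, 50);
if (!limiter.tryAcquire()) {
  // e.g. return HTTP 429 so the widget retries with backoff
}
```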
Updated
Apr 14 at 06:43pm BST
This incident has now been fully resolved. All systems are operating normally, and no further issues have been observed.
We appreciate your patience while we worked through this. A full post-mortem will be published shortly with more details on the root cause and the actions we've taken to prevent recurrence.
Thank you for bearing with us.
Updated
Apr 14 at 06:14pm BST
The platform remains stable and fully operational for most users. Our team is continuing to investigate the root cause and closely monitor system performance. We’ll post further updates only if there are significant developments. Thank you for your continued patience and understanding.
Updated
Apr 14 at 05:56pm BST
Performance is stable across most of the platform, and the majority of services are operating as expected. While the underlying issue has not been fully resolved, our mitigation steps have restored normal functionality for most users. Our team continues to investigate the root cause and is monitoring closely to ensure the situation remains stable.
Thank you for your continued patience; we'll share more updates as we progress.
Updated
Apr 14 at 05:46pm BST
We’ve identified a suspected root cause and are actively working to address it. Most services are now back to normal, though some users may still experience intermittent slowness or access issues. We’re continuing to monitor closely and will provide further updates as we make progress. Thank you for your patience.
Updated
Apr 14 at 05:27pm BST
We're seeing signs of gradual recovery, and some users may notice improved performance. However, the issue is not yet fully resolved. Our team is continuing to investigate and monitor the situation closely to ensure full restoration of service. Further updates will follow. Thank you for your continued patience.
Created
Apr 14 at 05:08pm BST
We are currently aware of an issue where some users are experiencing slowness and difficulty accessing Ventrata. Our team is actively investigating the root cause and working to resolve the issue as quickly as possible. We’ll provide updates as soon as more information becomes available. Thank you for your patience.