On March 1, from 7:52 PM to 8:02 PM Central Time (CT), the network experienced intermittent service interruptions across consumer-facing applications, including Scorekeeper’s ability to keep score.
The incident was caused by database lock contention on scoring-critical records during concurrent write traffic. As contention increased, request queues grew and database connection capacity was exhausted, resulting in broad service degradation.
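The failure mode above can be sketched with a small discrete-time model: sessions that take a connection and then block on a contended row lock pin that connection, so steady write traffic against a hot row eventually exhausts the pool. All numbers here are illustrative assumptions, not measurements from the incident.

```python
# Discrete-time sketch of how row-lock contention exhausts a connection pool.
# Pool size, lock hold time, and arrival rate are assumed values for illustration.

POOL_SIZE = 10          # max concurrent database connections (assumed)
LOCK_HOLD_TICKS = 50    # how long a long-running blocker holds the row lock
ARRIVALS_PER_TICK = 1   # steady write traffic against the same hot row

waiting = 0             # sessions queued on the lock, each pinning a connection
rejected = 0            # requests turned away once the pool is exhausted

for tick in range(LOCK_HOLD_TICKS):
    for _ in range(ARRIVALS_PER_TICK):
        if waiting < POOL_SIZE:
            waiting += 1    # request takes a connection, then blocks on the lock
        else:
            rejected += 1   # pool exhausted: broad degradation begins

print(f"queued on lock: {waiting}, rejected: {rejected}")
```

The key point the sketch captures is that a single long-running blocker is enough: every waiter holds a connection while it waits, so the pool drains even though only one row is contended.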
Engineering, Support, and DBA teams responded quickly, identified the blocking session chain, and terminated long-running blockers. Customer-visible impact ended at 8:02 PM CT, Support confirmed recovery at approximately 8:11 PM CT, and full backend stabilization was completed by 9:15 PM CT.
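Identifying the blocking session chain amounts to walking "who waits on whom" data (as exposed by the database's activity views) back to its roots. A minimal sketch of that logic, with hypothetical session IDs and a hypothetical wait map:

```python
# Sketch of finding the root of a blocking chain from (session, blocked_by)
# wait data. Session IDs and the wait map below are hypothetical examples.

def find_root_blockers(blocked_by: dict[int, int]) -> set[int]:
    """Return sessions that block others but are not themselves blocked."""
    blockers = set(blocked_by.values())
    blocked = set(blocked_by.keys())
    return blockers - blocked   # roots of the chain: terminate these first

# Example chain: 303 waits on 202, which waits on 101.
waits = {202: 101, 303: 202}
print(find_root_blockers(waits))  # {101}
```

Terminating the root blocker (rather than its downstream waiters) releases the whole chain at once, which matches the response described above.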
To reduce the chance of recurrence, we are adding safeguards, starting with a monitored circuit breaker, and fixing the specific requests that triggered the slowdown.
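For context on the circuit-breaker safeguard: the idea is to fail fast once the database shows sustained errors, shedding load instead of queueing more waiters. A minimal count-based sketch, with assumed thresholds rather than our production configuration:

```python
# Minimal count-based circuit breaker sketch. The threshold and behavior are
# illustrative assumptions, not the production implementation.

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 5):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.open = False

    def call(self, fn):
        if self.open:
            # Fail fast instead of pinning another connection on a struggling DB.
            raise RuntimeError("circuit open: shedding load")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True   # stop sending traffic until recovery
            raise
        self.failures = 0          # a success resets the failure count
        return result
```

A production breaker would also include a half-open state that periodically probes for recovery; the sketch shows only the load-shedding behavior that would have bounded the queue growth seen in this incident.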