While all eyes are trained on the gridiron during the Super Bowl, much of the action goes on behind the scenes to create the optimal viewing experience for fans.
Like the athletes on the field, the team at Hydrolix had to be at peak performance. The company ran real-time, consolidated logging across multiple content delivery networks (CDNs) to support Fox’s content delivery during last year’s big game between the Philadelphia Eagles and the Kansas City Chiefs.
Think it hurts to get hit by a 300-lb linebacker? Try being slammed by more than 200 terabytes of data coming in. That’s what Hydrolix had to handle during the 2025 Super Bowl.
The company already had plenty of experience managing large livestreaming events, including a previous Super Bowl hosted by Paramount. Its observability platform ingested, stored, and analyzed massive volumes of CDN log data, consolidating records from four CDNs into a single logical table.
Included in Hydrolix’s superstar lineup were AWS’s EKS platform, two of its own Kaiju clusters, and a query architecture that stretched across numerous query pools, ensuring users and dashboards never suffered from “noisy neighbors,” a.k.a. resource contention. Hydrolix delivered real-time, hyperscale observability and analytics, processing 55 billion records and fielding 55,000 queries, all while maintaining low latency, fast query response times, and a high degree of efficiency.
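The noisy-neighbor point boils down to keeping different classes of queries on separate compute so one heavy consumer cannot starve the rest. A minimal, hypothetical sketch of that routing idea in Python (the pool names and the route_query helper are illustrative assumptions, not Hydrolix’s actual architecture):

```python
# Hypothetical sketch: route each query class to its own isolated worker pool
# so a heavy dashboard refresh can't starve ad-hoc analyst queries.
# Pool names and this routing helper are illustrative, not Hydrolix's API.
from dataclasses import dataclass

@dataclass
class Query:
    source: str  # e.g. "dashboard", "analyst", "alerting"
    sql: str

# Each pool maps to a separate set of query workers with its own resource budget.
POOLS = {
    "dashboard": "pool-dashboards",
    "analyst": "pool-adhoc",
    "alerting": "pool-alerts",
}

def route_query(q: Query) -> str:
    """Pick an isolated pool for the query; unknown sources fall back to ad-hoc."""
    return POOLS.get(q.source, "pool-adhoc")

print(route_query(Query("dashboard", "SELECT count() FROM cdn_logs WHERE status >= 500")))
# -> pool-dashboards
```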
As fans pull together snacks and viewing parties settle in to watch Sunday’s Super Bowl pitting the Seattle Seahawks against the New England Patriots, Tech-Channels spoke with Hydrolix Field CTO Karen Johnson about the challenges of running such a large live-streaming event and the lessons learned from last year’s contest.
Q. Last year’s Super Bowl was not Hydrolix’s first time running a big event. What experience did you all bring to the table?
A. We have experience running events like this before. We ran the Super Bowl for Paramount and we've done the Olympics. We took a lot of that knowledge into this, but this one was just much larger than anything we had done before.
Q. The Super Bowl draws a lot of viewers and these days they watch through a variety of sources. That’s a lot of data to accommodate and it’s structured in multiple formats. How do you tackle getting a fix on what viewing traffic might be?
A. It’s a common problem across tech: creating a really big burst of synthetic traffic that you can use to practice for an event. We get estimates from the customer. So from Fox, we're talking about the range of viewers they think they’re going to have, and what they had the last time they hosted it. We try to get some numbers that way, and we know that we're dealing with something that's much larger than usual. The Super Bowl is one of the biggest events on the planet. And from a streaming perspective, we already know that it's going to be bigger than anything we see on a daily basis. There's not a way to simulate it either. We can’t just turn on the Super Bowl traffic simulator—that doesn't exist.
Q. How did you prepare for traffic volumes that would likely exceed your expectations?
A. Fortunately, Fox hosted two playoff games, so we did get information from running those. In the first playoff game we did great. Everything went super smoothly. All the prep work that we had done was amazing. For the second playoff game, we actually struggled because the environment had changed. Fox has different sets of streaming infrastructure and they're configured a little bit differently. The first playoff game was done on one streaming infrastructure, and the data in the second game was wildly different from what we had seen before. Fortunately, we had built a lot of credibility in handling that first playoff game, and there was a very clear explanation of [why we had issues] with the second game.
Q. How did you apply the lessons learned from those two playoff games?
A. After playoff game two, we modified quite a bit to make the data look more like it had in the first playoff game. We could see that a particular field was what crushed us. We had to figure out how to reduce it, pull out just the data pieces that were needed, and work out how the data needed to be viewed. We didn’t need the part that was really causing the trouble. We needed the segment that was being looked at, not the whole user token. We were really successful in the first playoff game, so the question was: how do we make it look more like that?
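In practice, a fix like that comes down to extracting only the piece of a heavy, high-cardinality field that the dashboards actually use. A small, hypothetical illustration (the token layout and field names are invented for this sketch, not Fox’s real schema):

```python
# Hypothetical illustration: ingest only the segment of a large user token that
# is actually queried, instead of the whole value. The delimited token layout
# and the segment position are assumptions made up for this sketch.

def extract_segment(user_token: str) -> str:
    """Return just the segment portion of a dot-delimited token."""
    parts = user_token.split(".")
    return parts[2] if len(parts) > 2 else user_token

raw_record = {"user_token": "acct123.device456.seg789.a1b2c3d4e5f6"}
slimmed_record = {"segment": extract_segment(raw_record["user_token"])}
print(slimmed_record)  # {'segment': 'seg789'}
```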
Q. Hydrolix had already run a Super Bowl for Paramount. How did the Fox event differ?
A. There were two big things. Our software evolves, so settings that worked last year may or may not work this year based on the traffic. And really the big wild card with Fox was that they were streaming the Super Bowl. There was a login wall, but it didn't have a paywall. So, it was wide open. People could sign up that day, not pay, and log in. That was the big thing. We extrapolated and planned, and understood that if it's really clear from the beginning who's going to win, viewership will drop off. But if there's a big turnover in the game, we're going to see traffic go up dramatically. So we prepared for about two times the traffic we actually got, just to be ready for the scenario where we suddenly got a ton of viewers.
Q. What were Fox’s expectations?
A. Fox was mainly worried about viewer experience. Social media was ablaze with how bad the viewing experience was for the Netflix Mike Tyson/Jake Paul fight, and Fox was looking at that. They also have contracts with the different CDNs about how much capacity they've reserved, so they're watching to see if the CDNs are staying within their capacity limits, because they'll have to pay overages if they go over. They can do steering, moving traffic one direction or the other. That has happened with other sporting events for geographies that are having particular issues, or segments of viewers that seem to be having more problems than others. We were watching our infrastructure live to make sure that we were staying within comfortable limits and not getting up to the edge. We actually did that the whole game—we didn't get to a point where we got super nervous, but that was because of all the prep that we did ahead of time.
Q. How long did it take you to prep for the Super Bowl?
A. We started in earnest in August, specifically looking at the CDN logs. The CDN logs have every single request, a ton of detail. Fox started with five different CDNs. One fell out in the process of getting to the Super Bowl, but we had to go through and normalize all that data, because Fox wanted to do analytics across it. And every vendor has a different format. Every vendor has a different set of fields. They send different information. They have their own special things that they provide that nobody else provides. So, we had to come up with a common set of things we could extract from all of them.
Q. That sounds intense.
A. There’s some work to do around that. It's not just naming problems; one vendor sends response time in seconds, and another one sends it in milliseconds. That was the functional piece of it. After we did that and we started driving some traffic, we wanted to know if the data we captured was right, if we were doing the right things to the data, and to validate whether we were getting all of the data. During that first playoff game, we went back and compared against the sources, the logs coming from Akamai or Fastly or [other CDNs], to see whether they matched what those vendors reported. None of the CDN vendors guarantee log delivery. It's not a transaction, so you have to be okay with some small percentage of difference. We had to go through these steps of validating the data and querying it to see if there was a problem, or whether someone wanted to query it in a very different way we hadn’t accommodated. And then, at scale, did we get all the data, the right data?
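To make the unit and naming problem concrete, here is a minimal sketch of the kind of normalization step being described. The vendor names, field names, and unit conventions are assumptions for illustration only; real Akamai or Fastly log formats differ.

```python
# Minimal sketch: map heterogeneous CDN log records onto one common schema so
# they can land in a single logical table. Vendor names, field names, and unit
# conventions below are invented for illustration.

def normalize(vendor: str, record: dict) -> dict:
    if vendor == "cdn_a":
        # hypothetical vendor A reports response time in seconds
        return {
            "timestamp": record["ts"],
            "status": record["http_status"],
            "bytes_sent": record["bytes"],
            "response_time_ms": record["resp_time_s"] * 1000,
        }
    if vendor == "cdn_b":
        # hypothetical vendor B already uses milliseconds, but different names
        return {
            "timestamp": record["time"],
            "status": record["status_code"],
            "bytes_sent": record["body_bytes"],
            "response_time_ms": record["ttfb_ms"],
        }
    raise ValueError(f"unknown vendor: {vendor}")

rows = [
    normalize("cdn_a", {"ts": "2025-02-09T23:59:59Z", "http_status": 200,
                        "bytes": 4096, "resp_time_s": 0.042}),
    normalize("cdn_b", {"time": "2025-02-09T23:59:59Z", "status_code": 200,
                        "body_bytes": 4096, "ttfb_ms": 42}),
]
print(rows)  # both records now share one schema
```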
Q. And did that level of prep give Fox confidence that traffic could be accommodated during the Super Bowl?
A. What was cool is that we gained enough credibility with them that instead of the questions being pointed at us to show our numbers were right, the questions turned to the CDN vendors about why their numbers didn’t match ours. And Fox did load tests. We had a configuration that worked well, scaled to twice what we needed, simply because we didn't know how many viewers there would be.
Q. How concerned were you guys that some bad actor would come in and foul things up?
A. Our stuff is fairly hardened. We were assuming Fox’s was as well—they have really stringent security requirements. The streaming companies tend to be concerned about whether a bad actor is viewing a stream and whether there’s piracy. That wasn't part of what we were monitoring in this particular event. Of course, anything could have happened, but it was not something that was so high on our list of priorities that we were planning for it. We had taken precautions to prevent something from happening. Because we get every single piece of data, if there's something like a DDoS attack, where you get a big chunk of traffic coming at you, we actually handle it really well because of how we scale. We can gather all that information and block IPs really quickly. We could gather all the logs so they could analyze them in real time. If it was something more subtle, where somebody's tokens are being shared, we gather enough data so Fox can look at that after the event.
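The DDoS point rests on having every request in the logs, which makes per-IP anomaly checks cheap. A rough, hypothetical sketch of that kind of check (the threshold and field names are assumptions, not how Hydrolix or Fox actually implemented it):

```python
# Rough sketch: count requests per client IP over a short window of log records
# and flag heavy hitters as candidates for blocking. The 10,000-request
# threshold and the "client_ip" field name are illustrative assumptions.
from collections import Counter

def heavy_hitters(log_records, threshold=10_000):
    """Return client IPs whose request count in this window meets the threshold."""
    counts = Counter(rec["client_ip"] for rec in log_records)
    return [ip for ip, n in counts.items() if n >= threshold]

window = [{"client_ip": "203.0.113.7"}] * 12_000 + [{"client_ip": "198.51.100.9"}] * 40
print(heavy_hitters(window))  # ['203.0.113.7']
```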
Q. How does this compare to traditional observability?
A. There are observability solutions where you set up an agent and you have all these dashboards out of the box, but they're not typically intended for this scale of traffic. They're typically intended for an application that's running on top of a database, and those logs also have a time-bound value to them. After a few days, they're not necessarily useful anymore at a granular scale. Our company was founded on the premise of being able to observe CDN logs, so we went for scale really quickly. And that's where other observability solutions haven't really caught up to where we are. We love a whole lot of bursty traffic; we can handle that all day long. When Fox selected us for observability, we grabbed all the data from the different CDNs and provided dashboards targeted at what they wanted to monitor during the event, and we collaborated with them on that. We didn't have out-of-the-box dashboards for specifically what they were doing, but we collaborated to get them built. And we were monitoring the traffic, not the backend applications. Fox was also using AWS for transcoding, so we were watching some of those logs as well, just to see what was happening there and correlate it to the traffic.
As the Super Bowl continues to set new records for digital viewership, the real win happens behind the scenes—where preparation, scale, and real-time insight determine whether fans ever notice the complexity at all. For Hydrolix and Fox, months of planning, hard-earned experience, and hyperscale observability ensured that when the game was on the line, the technology and the viewers never missed a play.