“Super Bowl LVI took place earlier this year, and I watched the game like many of you. But as the CEO of a real-time analytics platform provider, I have to confess I was more fascinated by the action off the field,” says Venkat Venkataramani, co-founder and CEO, Rockset. He focuses on the need for real-time analytics for building operational intelligence in this first part of a two-article series discussing the future of data-driven decisions.
Coaches and players huddled around Microsoft Surface tablets. The devices have been ubiquitous on NFL sidelines since debuting almost a decade ago. While some are still just watching instant replays, many others are leveraging the devices for something much more powerful: sophisticated real-time insights and recommendations to guide their game-day play calls.
Through the NFL’s analytics service, Next Gen Stats, every play outcome is instantly categorized, tagged and correlated with individual player location and performance data. That in-game data is also combined with ML models trained on decades of historical game and player statistics. Via publicly accessible APIs, coaches can tap into specific real-time predictions such as the likely success of a 2-point conversion, the probability of completing a specific pass, the expected yards gained for a rushing play, and more.
Besides directly consulting these smart digital playbooks, two-thirds of NFL coaches also call on their team’s analytics expert connected via headset to help them make data-driven on-field decisions during critical moments.
The Problem with Stale Business Intelligence
Who would have expected the NFL to put the average Fortune 500 corporation to shame when it comes to real-time data-driven decision-making? Sadly, it’s the truth.
Because when it comes to gathering and using data to improve their operations, most companies still use batch-based data warehouses to crunch datasets weeks or months after the fact.
In NFL terms, that would be like watching a game film on Mondays.
However, Monday morning quarterbacking no longer suffices in the hyper-competitive NFL. The same goes for stale business intelligence (BI) in today’s fast-paced, globally-connected economy.
As corporate operations become reliant on real-time data, so does their financial health. Businesses can no longer afford data bottlenecks or errors and the resulting downtime.
Slow is the new down.
Alerts – when they flag outages in progress – are too late.
To solve this, companies need comprehensive visibility into the real-time state of their mission-critical data and the ability to foresee and prevent potential problems.
Some call this business observability. Others, like me, prefer the term operational intelligence. In this blog, I’ll use the terms interchangeably as I trace the history of two formerly distinct areas – systems management and business analytics – to show how they have come together today to support real-time data-driven decision making and mission-critical operations.
The Roots of Operational Intelligence
Viewing last quarter’s sales on a dashboard or report is classic BI – out-of-date, passive, and essentially useless for tactical decisions (and sometimes longer-term planning, too).
By contrast, operational intelligence combines historical and up-to-the-second data around complex technology-driven business processes. Its data sources are typically broader than finance-oriented BI. Rather than plain, dumb monitoring, operational intelligence uses complex analytics to generate correlated insights into your operations’ current and future state.
This enables you to affect and improve any part of your operations in real-time, as well as ward off potential bottlenecks and failures. And whereas BI leverages a data warehouse and associated ETL/ELT tools, operational intelligence is produced by a scalable real-time analytics stack in the cloud.
Operational intelligence/business observability has its roots in the Space Race between the U.S. and the Soviet Union, when aerospace engineers developed ways to monitor and predict the health of their rockets from sensor readings – a property control theorists formalized as observability.
Three decades later, companies began turning away from monolithic mainframe computers towards cheaper, more flexible minicomputers – the first servers. In this pre-Internet client-server era, there were dozens of server makers, some running flavors of the Unix operating system and others with wholly proprietary platforms.
Software arose in the 1980s to help IT managers monitor and manage their diverse and fast-growing data centers and networks. These tools were known as IT Operations Management (ITOM) or systems management software. The Big Five platforms – CA Unicenter, IBM Tivoli, HP OpenView, BMC Patrol, and later, Microsoft System Center – all promised the deep visibility and control that an IT admin needed to manage hundreds of servers at a time. They called this capability system observability.
Application Observability and the Cloud
As the tech industry moved into the Internet era of the 1990s and the web/cloud era beginning in the 2000s, ITOM tools faded, replaced by a new generation of tools called application performance management, or APM. Upstarts such as Datadog, Dynatrace, New Relic, and AppDynamics (now Cisco) delivered the same holistic view of complex systems as their predecessors. The difference was that APM tools were aimed at DevOps and site reliability engineers (SREs), not IT or network admins, and they focused on helping to manage stacks of cloud and web applications. Hence, they adopted the term application observability for their wares.
More recently, vendors like BigEye and Monte Carlo Data have embraced the term data observability to describe platforms providing scalable, real-time visibility into databases, data pipelines and data quality. Their offerings are aimed at data engineers and data ops teams.
Business observability, by contrast, provides a much broader view than its predecessors. Rather than just focusing on a single technology such as server hardware, web applications, or data warehouses, business observability and operational intelligence platforms help you manage modern, data-infused business operations, whatever the underlying technology.
This is more than just real-time alerts, which can help teams spot potentially catastrophic anomalies early but also create an exhausting overload of false positives.
Through the Looking Glass of Real-time Data
By gathering and correlating multiple data points and running them through predictive ML models, state-of-the-art business observability delivers much more. It can filter through anomalies to block false alarms and prevent alert fatigue. It can provide instant answers to complex queries in many areas: cybersecurity, logistics, sales and any other domain that is key to your company’s bottom line.
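To make the false-alarm filtering idea concrete, here is a minimal sketch of one common technique: flagging a metric only when it stays far outside its rolling baseline for several consecutive readings, so one-off spikes don’t page anyone. The class name, parameters, and thresholds are illustrative assumptions, not drawn from any particular observability product.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyFilter:
    """Illustrative alert filter: a reading must be a statistical outlier
    for several consecutive observations before an alert fires."""

    def __init__(self, window=20, z_threshold=3.0, persistence=3):
        self.history = deque(maxlen=window)   # rolling baseline of normal readings
        self.z_threshold = z_threshold        # how many std devs counts as anomalous
        self.persistence = persistence        # consecutive outliers needed to alert
        self.streak = 0

    def observe(self, value):
        """Return True only when an anomaly has persisted long enough to matter."""
        if len(self.history) >= 5:            # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.z_threshold
        else:
            anomalous = False
        if not anomalous:
            self.history.append(value)        # keep outliers out of the baseline
        self.streak = self.streak + 1 if anomalous else 0
        return self.streak >= self.persistence

f = AnomalyFilter()
for i in range(20):                           # steady traffic: no alerts
    f.observe(10.0 if i % 2 == 0 else 11.0)
alerts = [f.observe(100.0) for _ in range(3)] # sustained spike: third hit alerts
```

A single stray spike resets the streak and never alerts, while a sustained deviation surfaces within a few readings – a simple stand-in for the richer correlation and ML-based filtering described above.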
Operational intelligence also helps workers uncover and fix the root causes of problems faster than ever and make decisions that maximize productivity, increase revenue and minimize downtime. It can also autonomously drive recommendation engines and other revenue-generating systems that operate too quickly for human decision-makers.
No wonder business observability users are diverse, including marketing operations teams, product engineers, sales teams, business operations, and customer support.
This article originally appeared in ToolBox.