The supercloud concept promises fewer accidental architectures and more cohesive, manageable cloud deployments. Delivering on this vision requires vendor-agnostic tooling for monitoring performance and securing data.
Even though the concept has existed since 2017, it’s only recently attracted attention from industry pundits and technology vendors. If you talk to the vendors, supercloud is whatever they have on the truck to sell. Pundits, like the fine folks over at Wikibon, are still formulating an acceptable definition. Industry analysts, interestingly, are largely silent on supercloud today.
We have to start somewhere, however, so let’s start at the beginning. The supercloud concept was created by researchers at Cornell University in 2017. They defined supercloud as:
…a cloud architecture that enables application migration as a service across different availability zones or cloud providers. The Supercloud provides interfaces to allocate, migrate, and terminate resources such as virtual machines and storage and presents a homogeneous network to tie these resources together.
As you can see, supercloud emphasizes application portability across cloud environments. Incorporating networking and storage for VM migration is no small feat. The Cornell researchers rely on virtual machine (VM) layering to resolve a number of portability challenges. VM layering means running your target VM inside a host VM.
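To make the prerequisite for that layering concrete, here is a minimal sketch, assuming a Linux host running KVM, that checks whether nested virtualization is enabled; the sysfs paths are KVM-specific, and other hypervisors expose this differently.

```python
# Minimal sketch: check whether a Linux/KVM host allows nested virtualization,
# the prerequisite for VM layering (running a guest VM inside a host VM).
from pathlib import Path

def nested_virtualization_enabled() -> bool:
    # KVM exposes the nested flag per CPU vendor module; "Y" or "1" means enabled.
    for module in ("kvm_intel", "kvm_amd"):
        flag = Path(f"/sys/module/{module}/parameters/nested")
        if flag.exists():
            return flag.read_text().strip() in ("Y", "1")
    return False

if __name__ == "__main__":
    state = "enabled" if nested_virtualization_enabled() else "unavailable"
    print(f"nested virtualization: {state}")
```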
The proposed benefits of supercloud, chiefly seamless application portability and freedom from per-cloud lock-in, are promising. However, supercloud also introduces new complexity into these environments.
If we’re willing to accept the core idea of seamless application portability offered by supercloud, we need to take the next step. How do we live with this architecture? We still need to monitor everything, and now there’s more to monitor: the application VMs, the host VMs they run in, and all of the native cloud services that supercloud claims to abstract away.
This expansion massively increases the cost and complexity of monitoring: more data, from more places. Costs skyrocket in traditional monitoring environments, where agents push all of the logs, events, metrics, and traces to a centralized analysis system. Remember, you can abstract away the application environment, but you can’t abstract away data egress costs. Moving data from one cloud to another is costly.
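To put a rough number on that, here is a back-of-the-envelope calculation; the per-GB rate is a placeholder assumed for illustration, not any provider's actual price, and real egress pricing is tiered and varies by provider and region.

```python
# Back-of-the-envelope egress cost for telemetry shipped across clouds each month.
# The rate below is an assumed placeholder, not real provider pricing.
DAILY_TELEMETRY_GB = 2_000      # logs, metrics, events, and traces produced per day
EGRESS_RATE_PER_GB = 0.09       # assumed cross-cloud egress rate in USD per GB

monthly_gb = DAILY_TELEMETRY_GB * 30
monthly_egress_cost = monthly_gb * EGRESS_RATE_PER_GB
print(f"{monthly_gb:,} GB/month -> ${monthly_egress_cost:,.2f} in egress alone")
# 60,000 GB/month -> $5,400.00 per month, before storage and platform licensing.
```

And that is just the egress line item; the same data is billed again for storage and analysis once it lands.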
Let’s put the monitoring challenges aside for a moment. There’s a bigger dragon hiding in those superclouds – security. Today’s security environments span on-prem, edge, and one or more clouds. There are dozens, if not hundreds, of tools used across SOCs and cloud operations teams. Each tool requires unique skills to use effectively. Shifting to a supercloud architecture changes, well, none of that. Abstracted or not, the underlying infrastructure still needs securing by highly skilled staff.
Each cloud has its own security frameworks, which work well within their own domains. If the supercloud is to live up to its promise, a cohesive security framework that works across environments must be created. We’ve seen some efforts around data standardization with projects like OCSF, but nothing as yet around process standardization. That will take a level of cooperation between public and private cloud providers that we haven’t seen before, and anything that blunts their ability to lock in users is unlikely to happen.
Realizing the supercloud vision requires a few critical capabilities. First, we need to control movement of our observability data, i.e. the logs, metrics, events, and traces essential to understanding application performance and security posture. Next, we must be able to integrate the raft of security tools with the right data, in the right format, at the right time. Finally, we need to reach into any component of our supercloud for investigations, troubleshooting, and exploration. Let’s break down each.
Think of observability data as exhaust: a byproduct of the applications that produce it. As a byproduct, not all of it is useful or valuable. Yet the legacy tools we’re using today force us to ship all of it for analysis, driving up costs for data egress, storage, and platform licensing. Worse, we can typically only send that data to one destination; if other destinations could use it, we’re locked into a single silo.
An open observability pipeline sits between the supercloud sources and destinations, allowing us to route data to any destination in the expected format. We can also enrich data with additional context: Which cloud did this come from? Which country does this IP address reside within? Lastly, we can choose which data to move, and when. This gives operations teams far more control and flexibility over their data.
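As a rough, vendor-neutral illustration of that idea (not any particular product's API), a pipeline stage boils down to three steps: enrich, filter, and route. The field names, destination names, and the GeoIP lookup below are hypothetical placeholders.

```python
# Minimal sketch of an observability pipeline stage: enrich each event with
# context, drop what isn't needed, and route the rest to the right destinations.
# Field names, destination names, and lookup_country() are hypothetical.
from typing import Dict, List

def lookup_country(ip: str) -> str:
    # Placeholder for a real GeoIP lookup (e.g., a local IP-to-country database).
    return {"203.0.113.10": "AU"}.get(ip, "unknown")

def enrich(event: Dict, source_cloud: str) -> Dict:
    event["cloud"] = source_cloud                      # which cloud did this come from?
    if "client_ip" in event:
        event["geo_country"] = lookup_country(event["client_ip"])
    return event

def route(event: Dict) -> List[str]:
    destinations = ["object_store"]                    # cheap retention for everything we keep
    if event.get("severity") in ("error", "critical"):
        destinations.append("siem")                    # only high-value events reach the SIEM
    return destinations

def process(events: List[Dict], source_cloud: str) -> List[Dict]:
    routed = []
    for event in events:
        if event.get("severity") == "debug":
            continue                                   # choose which data to move: drop debug noise
        enriched = enrich(event, source_cloud)
        routed.append({"event": enriched, "destinations": route(enriched)})
    return routed

if __name__ == "__main__":
    sample = [
        {"severity": "debug", "msg": "heartbeat"},
        {"severity": "error", "msg": "timeout", "client_ip": "203.0.113.10"},
    ]
    print(process(sample, source_cloud="cloud-a"))
```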
As a long-time data person, I know that integration is the whole game. And not just data integration, but process integration. We can’t wait years for a supercloud security framework to emerge. We have to use what we have today. In addition to controlling your data routing and shape, an observability pipeline also acts as a strategic integration hub linking together the full range of security tools your teams are using. This is frequently referred to as a cybersecurity mesh architecture, and its core purpose is integration.
Integration is more than just linking data. It requires inherent protocol and schema knowledge. That’s what an observability pipeline brings.
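As a toy example of what that schema knowledge buys you, consider mapping two differently shaped authentication logs onto one OCSF-inspired common shape so downstream tools see consistent fields. The field names and mappings below are illustrative only, not the actual OCSF specification.

```python
# Toy normalization: map differently shaped vendor authentication events onto
# one OCSF-inspired common shape. Field names are illustrative, not the OCSF spec.
from typing import Dict

def normalize_vendor_a(raw: Dict) -> Dict:
    return {
        "class_name": "authentication",
        "time": raw["ts"],
        "user": raw["userName"],
        "src_ip": raw["sourceAddress"],
        "status": "success" if raw["result"] == 0 else "failure",
    }

def normalize_vendor_b(raw: Dict) -> Dict:
    return {
        "class_name": "authentication",
        "time": raw["@timestamp"],
        "user": raw["actor"]["name"],
        "src_ip": raw["client"]["ip"],
        "status": raw["outcome"],
    }

# Every downstream consumer (SIEM, UEBA, data lake) now sees one shape,
# regardless of which cloud or product produced the original event.
```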
Open observability is more than just moving and shaping data. In the context of supercloud, we may not know the state of our applications, or even what data we can use to figure that out. Even if we could pay to move it, we don’t know what “it” is.
Open observability complements supercloud by allowing operators to seamlessly access containers, virtual machines, or even physical infrastructure to remotely debug and troubleshoot applications. By deploying an all-in-one adaptive agent alongside your application, you can access logs, interrogate running processes, and search data from across your fleet, regardless of where it might live in your supercloud implementation. This allows you to observe what’s happening across all of your infrastructure without moving all the data into high-cost analytics platforms.
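A rough sketch of that access pattern, with entirely hypothetical agent endpoints (the hosts and the /search route below are invented for illustration): ask every agent in the fleet a question, and only the answers cross cloud boundaries rather than the raw data.

```python
# Rough sketch of fleet-wide interrogation: query lightweight agents in place
# instead of shipping all raw data to a central analytics platform first.
# The agent hosts and their /search endpoint are hypothetical, not a real API.
import json
from typing import Dict, List
from urllib.parse import urlencode
from urllib.request import urlopen

AGENTS = [
    "http://agent-1.cloud-a.internal:9000",
    "http://agent-2.cloud-b.internal:9000",
]

def search_fleet(query: str) -> List[Dict]:
    results: List[Dict] = []
    for agent in AGENTS:
        url = f"{agent}/search?{urlencode({'q': query})}"
        with urlopen(url, timeout=5) as resp:   # each agent searches its own host
            results.extend(json.load(resp))
    return results

# e.g. search_fleet("status=500 AND service=checkout") returns only the matching
# events; the bulk of the logs never leaves the hosts where they were produced.
```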
The old joke that the cloud is just someone else’s computer also applies to supercloud. The difference is, with supercloud, you’re using everyone else’s computer. The challenges around monitoring, observing, and securing are compounded unless you have strong capabilities around observability data management. That’s where open observability comes in – putting you back in control.
If you want to learn more about open observability and how we build it at Cribl, check out our market-leading observability suite, available on prem, in the cloud, as part of a hybrid implementation, or in a supercloud near you.
Experience a full version of Cribl Stream and Cribl Edge in the cloud with pre-made sources and destinations.