Every new application-delivery platform needs a killer app. Could CDNs be the killer app that pulls the Cloud out of the data center and into routing centers, central offices, and head-ends of Telcos and MSOs?
Network operators are poised to deploy CDNs—or more accurately, caching and request redirection technology that can be used to support customer-facing CDN services and optimize the delivery of Over-the-Top content—throughout their network infrastructure. They are doing this to address the increasing amount of video their networks carry.
What is this caching and redirection technology? It’s software running on general-purpose processors configured with significant amounts of storage. By deploying a CDN, operators will also be deploying compute and storage capacity deep in their networks. This has the potential to form the hardware foundation for extending the Cloud to the network edge, raising the question: what new revenue-generating services could such a platform allow network operators to offer?
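To make the redirection half of that technology concrete, here is a minimal sketch of the core decision a redirection service makes: steer each client request to the best cache node. The node names and RTT figures are made up for illustration; a real service would also weigh load, content availability, and policy.

```python
# Hypothetical measured round-trip times (ms) from one client to each
# candidate cache node; the names and numbers are illustrative only.
caches = {"central-office-1": 5, "head-end-2": 12, "data-center": 40}

def redirect(client_rtts):
    # Core of a request-redirection service: pick the cache node
    # with the lowest measured round-trip time for this client.
    return min(client_rtts, key=client_rtts.get)

print(redirect(caches))  # → "central-office-1"
```

In practice this decision is typically surfaced through DNS or HTTP redirection, but the selection logic is the same.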
The obvious place to start is dynamic content. Allowing content providers to factor out dynamic page construction and place it on the very same edge nodes that are caching the static images that appear in those pages—while keeping backend databases in the data center or on the customer’s premises—seems like a sure win. The low-hanging fruit is to move dynamic site acceleration (DSA) to the edge. (See Google’s Page Speed for a summary of the well-known techniques.) This will be especially important for mobile networks, where reducing traffic over the wireless link is critically important. Beyond generic DSA, the edge is a natural place to enhance the end-user experience; social networking applications, for example, could benefit from edge acceleration.
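The factoring described above can be sketched in a few lines: static fragments are served from the edge cache, and only the small dynamic portion requires a round-trip to the backend database. All names here (`edge_cache`, `fetch_from_backend`) are hypothetical stand-ins, not a real API.

```python
# Static page fragments already sitting in the edge cache.
edge_cache = {
    "header": "<header>Acme News</header>",
    "footer": "<footer>(c) Acme</footer>",
}

def fetch_from_backend(user_id):
    # Stand-in for a query to the customer's backend database;
    # in this model it is the only request that leaves the edge node.
    return f"<div>Hello, user {user_id}!</div>"

def render_page(user_id):
    # Dynamic page construction runs at the edge: cached static
    # fragments are stitched around the per-user dynamic fragment.
    return "\n".join([
        edge_cache["header"],
        fetch_from_backend(user_id),
        edge_cache["footer"],
    ])

print(render_page(42))
```

The payoff is that the full page is assembled one hop from the user, so the wireless link carries the finished page while the long-haul path carries only the small dynamic query.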
But social networking applications are really an instance of the more general strategy of moving application logic to the network edge, which in turn implies that the edge nodes must provide an isolated execution environment in which different customer applications can run. While defining a specialized (restrictive) “edge” environment is tempting, offering business customers access to general-purpose virtual machines (VMs) gives them a natural way to extend the reach of the data-center-based Clouds they are already starting to exploit.
Supporting general-purpose VMs on edge caching nodes is not speculative. For example, Verivue’s content delivery solution isolates each building block service in its own VM—the caching service runs in one VM, the redirection service runs in a second VM, a legacy streaming service runs in a third VM, a transparent caching service runs in a fourth VM, and so on. Without changing the hardware deployment, existing VMs can be re-provisioned to support changing requirements, and new VMs can be instantiated to run new services. In other words, when the CDN already runs on a Cloud platform, by deploying the CDN throughout its network, the network operator has effectively deployed a full-featured Cloud (not just the hardware foundation for a Cloud) in their network.
So what other services might such a Cloud support? It’s instructive to consider other services currently being deployed in data centers—Facebook, Google docs, multiplayer games—and ask if any of them would benefit from wider distribution and even closer proximity to end-users. One can make a latency argument for highly interactive applications, but beyond simple latency, a case based on a scaling argument seems to have the most promise. This makes gaming applications, in particular, a promising candidate. And to generalize a bit, any application that collects “sensor” data at the edge of the network and sends it back towards the center for analysis would benefit from the ability to perform reduction functions on the raw data at the network edge.
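A minimal sketch of that reduction step, assuming a simple numeric sensor stream: raw readings stay at the edge, and only a small summary travels back toward the data center. The count/mean/max reduction here is illustrative; a real deployment would choose application-specific reductions.

```python
def reduce_at_edge(readings):
    # Summarize raw samples locally so only the aggregate is shipped
    # upstream; the raw stream never crosses the long-haul network.
    count = len(readings)
    return {
        "count": count,
        "mean": sum(readings) / count if count else 0.0,
        "max": max(readings) if readings else None,
    }

raw = [21.5, 22.0, 23.7, 21.9]   # raw samples collected at the edge
summary = reduce_at_edge(raw)    # only this summary is sent upstream
print(summary)
```

The bandwidth saving scales with the ratio of raw samples to summaries, which is exactly the scaling argument made above.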
Another answer is to look to the set of new network services coming out of the research community. This is the ancestry of Verivue’s caching technology, which is based on CoBlitz, a one-time Princeton research project that evolved and was hardened on PlanetLab, a global test-bed for deploying and evaluating experimental network services—ranging from robust and scalable naming services, to publish-subscribe systems, to new algorithms for peer-to-peer networks, to resilient routing overlays, to Internet monitoring services. All of these services benefit from having multiple points-of-presence and close proximity to end-users—exactly what an access network provides.
Time will tell, but history teaches us that wherever general-purpose compute and storage resources become available in new environments, innovative people will find a way to take advantage of them. I’m eager to hear what applications others think a Cloud that extends to the edge of the network might catalyze.